00:00:00.001 Started by upstream project "autotest-per-patch" build number 132809 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:02.850 The recommended git tool is: git 00:00:02.850 using credential 00000000-0000-0000-0000-000000000002 00:00:02.852 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.863 Fetching changes from the remote Git repository 00:00:02.866 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.878 Using shallow fetch with depth 1 00:00:02.878 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.878 > git --version # timeout=10 00:00:02.889 > git --version # 'git version 2.39.2' 00:00:02.889 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.902 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.902 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.598 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.611 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.624 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.624 > git config core.sparsecheckout # timeout=10 00:00:08.637 > git read-tree -mu HEAD # timeout=10 00:00:08.659 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.677 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.677 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.795 [Pipeline] Start of Pipeline 00:00:08.808 [Pipeline] library 00:00:08.810 Loading library shm_lib@master 00:00:08.810 Library shm_lib@master is cached. Copying from home. 00:00:08.823 [Pipeline] node 00:00:08.836 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:08.838 [Pipeline] { 00:00:08.848 [Pipeline] catchError 00:00:08.849 [Pipeline] { 00:00:08.863 [Pipeline] wrap 00:00:08.872 [Pipeline] { 00:00:08.881 [Pipeline] stage 00:00:08.882 [Pipeline] { (Prologue) 00:00:08.902 [Pipeline] echo 00:00:08.904 Node: VM-host-WFP7 00:00:08.910 [Pipeline] cleanWs 00:00:08.921 [WS-CLEANUP] Deleting project workspace... 00:00:08.921 [WS-CLEANUP] Deferred wipeout is used... 00:00:08.928 [WS-CLEANUP] done 00:00:09.190 [Pipeline] setCustomBuildProperty 00:00:09.265 [Pipeline] httpRequest 00:00:09.916 [Pipeline] echo 00:00:09.917 Sorcerer 10.211.164.112 is alive 00:00:09.923 [Pipeline] retry 00:00:09.924 [Pipeline] { 00:00:09.933 [Pipeline] httpRequest 00:00:09.937 HttpMethod: GET 00:00:09.937 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.938 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.956 Response Code: HTTP/1.1 200 OK 00:00:09.956 Success: Status code 200 is in the accepted range: 200,404 00:00:09.957 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.322 [Pipeline] } 00:00:14.333 [Pipeline] // retry 00:00:14.337 [Pipeline] sh 00:00:14.619 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.635 [Pipeline] httpRequest 00:00:15.066 [Pipeline] echo 00:00:15.068 Sorcerer 10.211.164.112 is alive 00:00:15.077 [Pipeline] retry 00:00:15.079 [Pipeline] { 00:00:15.093 [Pipeline] httpRequest 00:00:15.098 HttpMethod: GET 00:00:15.098 URL: 
http://10.211.164.112/packages/spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz 00:00:15.099 Sending request to url: http://10.211.164.112/packages/spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz 00:00:15.117 Response Code: HTTP/1.1 200 OK 00:00:15.117 Success: Status code 200 is in the accepted range: 200,404 00:00:15.118 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz 00:06:29.377 [Pipeline] } 00:06:29.388 [Pipeline] // retry 00:06:29.393 [Pipeline] sh 00:06:29.670 + tar --no-same-owner -xf spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz 00:06:32.217 [Pipeline] sh 00:06:32.499 + git -C spdk log --oneline -n5 00:06:32.500 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts 00:06:32.500 1ae735a5d nvme: add poll_group interrupt callback 00:06:32.500 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group() 00:06:32.500 969b360d9 thread: fd_group-based interrupts 00:06:32.500 851f166ec thread: move interrupt allocation to a function 00:06:32.520 [Pipeline] writeFile 00:06:32.535 [Pipeline] sh 00:06:32.820 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:06:32.831 [Pipeline] sh 00:06:33.115 + cat autorun-spdk.conf 00:06:33.115 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:33.115 SPDK_RUN_ASAN=1 00:06:33.115 SPDK_RUN_UBSAN=1 00:06:33.115 SPDK_TEST_RAID=1 00:06:33.115 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:33.122 RUN_NIGHTLY=0 00:06:33.124 [Pipeline] } 00:06:33.139 [Pipeline] // stage 00:06:33.154 [Pipeline] stage 00:06:33.156 [Pipeline] { (Run VM) 00:06:33.167 [Pipeline] sh 00:06:33.511 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:06:33.511 + echo 'Start stage prepare_nvme.sh' 00:06:33.511 Start stage prepare_nvme.sh 00:06:33.511 + [[ -n 4 ]] 00:06:33.511 + disk_prefix=ex4 00:06:33.511 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:06:33.511 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:06:33.511 + source 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:06:33.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:33.511 ++ SPDK_RUN_ASAN=1 00:06:33.511 ++ SPDK_RUN_UBSAN=1 00:06:33.511 ++ SPDK_TEST_RAID=1 00:06:33.511 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:33.511 ++ RUN_NIGHTLY=0 00:06:33.511 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:06:33.511 + nvme_files=() 00:06:33.511 + declare -A nvme_files 00:06:33.511 + backend_dir=/var/lib/libvirt/images/backends 00:06:33.511 + nvme_files['nvme.img']=5G 00:06:33.511 + nvme_files['nvme-cmb.img']=5G 00:06:33.511 + nvme_files['nvme-multi0.img']=4G 00:06:33.511 + nvme_files['nvme-multi1.img']=4G 00:06:33.511 + nvme_files['nvme-multi2.img']=4G 00:06:33.511 + nvme_files['nvme-openstack.img']=8G 00:06:33.511 + nvme_files['nvme-zns.img']=5G 00:06:33.511 + (( SPDK_TEST_NVME_PMR == 1 )) 00:06:33.511 + (( SPDK_TEST_FTL == 1 )) 00:06:33.511 + (( SPDK_TEST_NVME_FDP == 1 )) 00:06:33.511 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:06:33.511 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:06:33.511 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:06:33.511 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:06:33.511 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:06:33.511 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:06:33.511 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:06:33.511 + for nvme in "${!nvme_files[@]}" 00:06:33.511 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:06:33.769 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:06:33.769 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:06:33.769 + echo 'End stage prepare_nvme.sh' 00:06:33.769 End stage prepare_nvme.sh 00:06:33.781 [Pipeline] sh 00:06:34.063 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:06:34.063 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:06:34.063 00:06:34.063 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:06:34.063 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:06:34.063 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:06:34.063 HELP=0 00:06:34.063 DRY_RUN=0 
00:06:34.063 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:06:34.063 NVME_DISKS_TYPE=nvme,nvme, 00:06:34.063 NVME_AUTO_CREATE=0 00:06:34.063 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:06:34.063 NVME_CMB=,, 00:06:34.063 NVME_PMR=,, 00:06:34.063 NVME_ZNS=,, 00:06:34.063 NVME_MS=,, 00:06:34.063 NVME_FDP=,, 00:06:34.063 SPDK_VAGRANT_DISTRO=fedora39 00:06:34.063 SPDK_VAGRANT_VMCPU=10 00:06:34.063 SPDK_VAGRANT_VMRAM=12288 00:06:34.063 SPDK_VAGRANT_PROVIDER=libvirt 00:06:34.063 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:06:34.063 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:06:34.063 SPDK_OPENSTACK_NETWORK=0 00:06:34.063 VAGRANT_PACKAGE_BOX=0 00:06:34.063 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:06:34.063 FORCE_DISTRO=true 00:06:34.063 VAGRANT_BOX_VERSION= 00:06:34.063 EXTRA_VAGRANTFILES= 00:06:34.063 NIC_MODEL=virtio 00:06:34.063 00:06:34.063 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:06:34.063 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:06:35.966 Bringing machine 'default' up with 'libvirt' provider... 00:06:36.535 ==> default: Creating image (snapshot of base box volume). 00:06:36.793 ==> default: Creating domain with the following settings... 
00:06:36.793 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733784472_ec7efe680090717411b1 00:06:36.793 ==> default: -- Domain type: kvm 00:06:36.793 ==> default: -- Cpus: 10 00:06:36.793 ==> default: -- Feature: acpi 00:06:36.793 ==> default: -- Feature: apic 00:06:36.793 ==> default: -- Feature: pae 00:06:36.793 ==> default: -- Memory: 12288M 00:06:36.793 ==> default: -- Memory Backing: hugepages: 00:06:36.794 ==> default: -- Management MAC: 00:06:36.794 ==> default: -- Loader: 00:06:36.794 ==> default: -- Nvram: 00:06:36.794 ==> default: -- Base box: spdk/fedora39 00:06:36.794 ==> default: -- Storage pool: default 00:06:36.794 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733784472_ec7efe680090717411b1.img (20G) 00:06:36.794 ==> default: -- Volume Cache: default 00:06:36.794 ==> default: -- Kernel: 00:06:36.794 ==> default: -- Initrd: 00:06:36.794 ==> default: -- Graphics Type: vnc 00:06:36.794 ==> default: -- Graphics Port: -1 00:06:36.794 ==> default: -- Graphics IP: 127.0.0.1 00:06:36.794 ==> default: -- Graphics Password: Not defined 00:06:36.794 ==> default: -- Video Type: cirrus 00:06:36.794 ==> default: -- Video VRAM: 9216 00:06:36.794 ==> default: -- Sound Type: 00:06:36.794 ==> default: -- Keymap: en-us 00:06:36.794 ==> default: -- TPM Path: 00:06:36.794 ==> default: -- INPUT: type=mouse, bus=ps2 00:06:36.794 ==> default: -- Command line args: 00:06:36.794 ==> default: -> value=-device, 00:06:36.794 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:06:36.794 ==> default: -> value=-drive, 00:06:36.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:06:36.794 ==> default: -> value=-device, 00:06:36.794 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:36.794 ==> default: -> value=-device, 00:06:36.794 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:06:36.794 ==> default: -> value=-drive, 00:06:36.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:06:36.794 ==> default: -> value=-device, 00:06:36.794 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:36.794 ==> default: -> value=-drive, 00:06:36.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:06:36.794 ==> default: -> value=-device, 00:06:36.794 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:36.794 ==> default: -> value=-drive, 00:06:36.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:06:36.794 ==> default: -> value=-device, 00:06:36.794 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:36.794 ==> default: Creating shared folders metadata... 00:06:36.794 ==> default: Starting domain. 00:06:38.171 ==> default: Waiting for domain to get an IP address... 00:07:00.121 ==> default: Waiting for SSH to become available... 00:07:00.121 ==> default: Configuring and enabling network interfaces... 00:07:04.312 default: SSH address: 192.168.121.161:22 00:07:04.312 default: SSH username: vagrant 00:07:04.312 default: SSH auth method: private key 00:07:06.849 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:07:16.826 ==> default: Mounting SSHFS shared folder... 00:07:17.395 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:07:17.395 ==> default: Checking Mount.. 
00:07:19.299 ==> default: Folder Successfully Mounted! 00:07:19.299 ==> default: Running provisioner: file... 00:07:19.867 default: ~/.gitconfig => .gitconfig 00:07:20.439 00:07:20.439 SUCCESS! 00:07:20.439 00:07:20.439 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:07:20.439 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:07:20.439 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:07:20.439 00:07:20.462 [Pipeline] } 00:07:20.470 [Pipeline] // stage 00:07:20.475 [Pipeline] dir 00:07:20.475 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:07:20.476 [Pipeline] { 00:07:20.483 [Pipeline] catchError 00:07:20.484 [Pipeline] { 00:07:20.490 [Pipeline] sh 00:07:20.780 + vagrant ssh-config --host vagrant 00:07:20.780 + sed -ne /^Host/,$p 00:07:20.780 + tee ssh_conf 00:07:24.067 Host vagrant 00:07:24.067 HostName 192.168.121.161 00:07:24.067 User vagrant 00:07:24.067 Port 22 00:07:24.067 UserKnownHostsFile /dev/null 00:07:24.067 StrictHostKeyChecking no 00:07:24.067 PasswordAuthentication no 00:07:24.067 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:07:24.067 IdentitiesOnly yes 00:07:24.067 LogLevel FATAL 00:07:24.067 ForwardAgent yes 00:07:24.067 ForwardX11 yes 00:07:24.067 00:07:24.079 [Pipeline] withEnv 00:07:24.081 [Pipeline] { 00:07:24.093 [Pipeline] sh 00:07:24.374 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:07:24.374 source /etc/os-release 00:07:24.374 [[ -e /image.version ]] && img=$(< /image.version) 00:07:24.374 # Minimal, systemd-like check. 
00:07:24.374 if [[ -e /.dockerenv ]]; then 00:07:24.374 # Clear garbage from the node's name: 00:07:24.374 # agt-er_autotest_547-896 -> autotest_547-896 00:07:24.374 # $HOSTNAME is the actual container id 00:07:24.374 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:07:24.374 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:07:24.374 # We can assume this is a mount from a host where container is running, 00:07:24.374 # so fetch its hostname to easily identify the target swarm worker. 00:07:24.374 container="$(< /etc/hostname) ($agent)" 00:07:24.374 else 00:07:24.374 # Fallback 00:07:24.374 container=$agent 00:07:24.374 fi 00:07:24.374 fi 00:07:24.374 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:07:24.374 00:07:24.647 [Pipeline] } 00:07:24.663 [Pipeline] // withEnv 00:07:24.671 [Pipeline] setCustomBuildProperty 00:07:24.684 [Pipeline] stage 00:07:24.685 [Pipeline] { (Tests) 00:07:24.701 [Pipeline] sh 00:07:24.984 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:07:25.255 [Pipeline] sh 00:07:25.540 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:07:25.812 [Pipeline] timeout 00:07:25.812 Timeout set to expire in 1 hr 30 min 00:07:25.814 [Pipeline] { 00:07:25.829 [Pipeline] sh 00:07:26.111 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:07:26.795 HEAD is now at 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts 00:07:26.808 [Pipeline] sh 00:07:27.089 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:07:27.359 [Pipeline] sh 00:07:27.640 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:07:27.913 [Pipeline] sh 00:07:28.192 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:07:28.451 ++ readlink -f spdk_repo 00:07:28.451 + DIR_ROOT=/home/vagrant/spdk_repo 00:07:28.451 + [[ -n /home/vagrant/spdk_repo ]] 00:07:28.451 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:07:28.451 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:07:28.451 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:07:28.451 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:07:28.451 + [[ -d /home/vagrant/spdk_repo/output ]] 00:07:28.451 + [[ raid-vg-autotest == pkgdep-* ]] 00:07:28.451 + cd /home/vagrant/spdk_repo 00:07:28.451 + source /etc/os-release 00:07:28.451 ++ NAME='Fedora Linux' 00:07:28.451 ++ VERSION='39 (Cloud Edition)' 00:07:28.451 ++ ID=fedora 00:07:28.451 ++ VERSION_ID=39 00:07:28.451 ++ VERSION_CODENAME= 00:07:28.451 ++ PLATFORM_ID=platform:f39 00:07:28.451 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:28.451 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:28.451 ++ LOGO=fedora-logo-icon 00:07:28.451 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:28.451 ++ HOME_URL=https://fedoraproject.org/ 00:07:28.451 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:28.451 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:28.451 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:28.451 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:28.451 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:28.451 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:28.451 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:28.451 ++ SUPPORT_END=2024-11-12 00:07:28.451 ++ VARIANT='Cloud Edition' 00:07:28.451 ++ VARIANT_ID=cloud 00:07:28.451 + uname -a 00:07:28.451 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:28.451 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:29.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.020 Hugepages 00:07:29.020 
node hugesize free / total 00:07:29.020 node0 1048576kB 0 / 0 00:07:29.020 node0 2048kB 0 / 0 00:07:29.020 00:07:29.020 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:29.020 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:29.020 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:29.020 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:29.020 + rm -f /tmp/spdk-ld-path 00:07:29.020 + source autorun-spdk.conf 00:07:29.020 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:29.020 ++ SPDK_RUN_ASAN=1 00:07:29.020 ++ SPDK_RUN_UBSAN=1 00:07:29.020 ++ SPDK_TEST_RAID=1 00:07:29.020 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:29.020 ++ RUN_NIGHTLY=0 00:07:29.020 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:29.020 + [[ -n '' ]] 00:07:29.020 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:07:29.020 + for M in /var/spdk/build-*-manifest.txt 00:07:29.020 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:29.020 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:29.020 + for M in /var/spdk/build-*-manifest.txt 00:07:29.020 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:29.020 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:29.020 + for M in /var/spdk/build-*-manifest.txt 00:07:29.020 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:29.020 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:29.020 ++ uname 00:07:29.020 + [[ Linux == \L\i\n\u\x ]] 00:07:29.020 + sudo dmesg -T 00:07:29.020 + sudo dmesg --clear 00:07:29.020 + dmesg_pid=5432 00:07:29.020 + sudo dmesg -Tw 00:07:29.020 + [[ Fedora Linux == FreeBSD ]] 00:07:29.020 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.020 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.020 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:29.020 + [[ -x /usr/src/fio-static/fio ]] 00:07:29.020 + export FIO_BIN=/usr/src/fio-static/fio 
00:07:29.020 + FIO_BIN=/usr/src/fio-static/fio 00:07:29.020 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:29.020 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:29.021 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:29.021 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.021 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.021 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:29.021 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.021 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.021 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:29.281 22:48:44 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:29.281 22:48:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:29.281 22:48:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:29.281 22:48:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:07:29.281 22:48:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:07:29.281 22:48:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:07:29.281 22:48:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:29.281 22:48:44 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:07:29.281 22:48:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:29.281 22:48:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:29.281 22:48:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:29.281 22:48:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.281 22:48:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:29.281 22:48:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:29.281 22:48:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.281 
22:48:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.281 22:48:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.281 22:48:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.281 22:48:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.281 22:48:45 -- paths/export.sh@5 -- $ export PATH 00:07:29.281 22:48:45 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.281 22:48:45 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:07:29.281 22:48:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:07:29.281 22:48:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784525.XXXXXX 00:07:29.281 22:48:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784525.hB9fu8 00:07:29.281 22:48:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:07:29.281 22:48:45 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:07:29.281 22:48:45 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:07:29.281 22:48:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:07:29.281 22:48:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:07:29.281 22:48:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:07:29.281 22:48:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:07:29.281 22:48:45 -- common/autotest_common.sh@10 -- $ set +x 00:07:29.281 22:48:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:07:29.281 22:48:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:07:29.282 22:48:45 -- pm/common@17 -- $ local monitor 00:07:29.282 22:48:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.282 22:48:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.282 22:48:45 -- pm/common@25 -- $ sleep 1 00:07:29.282 22:48:45 -- pm/common@21 -- $ date +%s 00:07:29.282 22:48:45 -- pm/common@21 -- $ date +%s 00:07:29.282 22:48:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733784525 00:07:29.282 22:48:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733784525 00:07:29.282 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733784525_collect-cpu-load.pm.log 00:07:29.282 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733784525_collect-vmstat.pm.log 00:07:30.663 22:48:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:30.663 22:48:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:30.663 22:48:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:30.664 22:48:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:07:30.664 22:48:46 -- spdk/autobuild.sh@16 -- $ date -u 00:07:30.664 Mon Dec 9 10:48:46 PM UTC 2024 00:07:30.664 22:48:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:30.664 v25.01-pre-321-g06358c250 00:07:30.664 22:48:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:07:30.664 22:48:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:07:30.664 22:48:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:30.664 22:48:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:30.664 22:48:46 -- common/autotest_common.sh@10 -- $ set +x 
00:07:30.664 ************************************
00:07:30.664 START TEST asan
00:07:30.664 ************************************
00:07:30.664 using asan
00:07:30.664 22:48:46 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:07:30.664
00:07:30.664 real 0m0.000s
00:07:30.664 user 0m0.000s
00:07:30.664 sys 0m0.000s
00:07:30.664 22:48:46 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:30.664 22:48:46 asan -- common/autotest_common.sh@10 -- $ set +x
00:07:30.664 ************************************
00:07:30.664 END TEST asan
00:07:30.664 ************************************
00:07:30.664 22:48:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:07:30.664 22:48:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:07:30.664 22:48:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:30.664 22:48:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:30.664 22:48:46 -- common/autotest_common.sh@10 -- $ set +x
00:07:30.664 ************************************
00:07:30.664 START TEST ubsan
00:07:30.664 ************************************
00:07:30.664 using ubsan
00:07:30.664 22:48:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:07:30.664
00:07:30.664 real 0m0.000s
00:07:30.664 user 0m0.000s
00:07:30.664 sys 0m0.000s
00:07:30.664 22:48:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:30.664 22:48:46 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:07:30.664 ************************************
00:07:30.664 END TEST ubsan
00:07:30.664 ************************************
00:07:30.664 22:48:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:07:30.664 22:48:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:07:30.664 22:48:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:07:30.664 22:48:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:07:30.664 22:48:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:07:30.664 22:48:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:07:30.664 22:48:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:07:30.664 22:48:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:07:30.664 22:48:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:07:30.664 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:07:30.664 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:31.263 Using 'verbs' RDMA provider
00:07:47.129 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:08:02.006 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:08:02.006 Creating mk/config.mk...done.
00:08:02.006 Creating mk/cc.flags.mk...done.
00:08:02.006 Type 'make' to build.
00:08:02.006 22:49:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:08:02.006 22:49:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:08:02.006 22:49:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:08:02.006 22:49:17 -- common/autotest_common.sh@10 -- $ set +x
00:08:02.006 ************************************
00:08:02.006 START TEST make
00:08:02.006 ************************************
00:08:02.006 22:49:17 make -- common/autotest_common.sh@1129 -- $ make -j10
00:08:02.006 make[1]: Nothing to be done for 'all'.
00:08:14.235 The Meson build system
00:08:14.235 Version: 1.5.0
00:08:14.235 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:08:14.235 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:08:14.235 Build type: native build
00:08:14.235 Program cat found: YES (/usr/bin/cat)
00:08:14.235 Project name: DPDK
00:08:14.235 Project version: 24.03.0
00:08:14.235 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:08:14.235 C linker for the host machine: cc ld.bfd 2.40-14
00:08:14.235 Host machine cpu family: x86_64
00:08:14.235 Host machine cpu: x86_64
00:08:14.235 Message: ## Building in Developer Mode ##
00:08:14.235 Program pkg-config found: YES (/usr/bin/pkg-config)
00:08:14.235 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:08:14.235 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:08:14.235 Program python3 found: YES (/usr/bin/python3)
00:08:14.235 Program cat found: YES (/usr/bin/cat)
00:08:14.235 Compiler for C supports arguments -march=native: YES
00:08:14.235 Checking for size of "void *" : 8
00:08:14.235 Checking for size of "void *" : 8 (cached)
00:08:14.235 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:08:14.235 Library m found: YES
00:08:14.235 Library numa found: YES
00:08:14.235 Has header "numaif.h" : YES
00:08:14.235 Library fdt found: NO
00:08:14.235 Library execinfo found: NO
00:08:14.235 Has header "execinfo.h" : YES
00:08:14.235 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:08:14.235 Run-time dependency libarchive found: NO (tried pkgconfig)
00:08:14.235 Run-time dependency libbsd found: NO (tried pkgconfig)
00:08:14.235 Run-time dependency jansson found: NO (tried pkgconfig)
00:08:14.235 Run-time dependency openssl found: YES 3.1.1
00:08:14.235 Run-time dependency libpcap found: YES 1.10.4
00:08:14.235 Has header "pcap.h" with dependency libpcap: YES
00:08:14.235 Compiler for C supports arguments -Wcast-qual: YES
00:08:14.235 Compiler for C supports arguments -Wdeprecated: YES
00:08:14.235 Compiler for C supports arguments -Wformat: YES
00:08:14.235 Compiler for C supports arguments -Wformat-nonliteral: NO
00:08:14.235 Compiler for C supports arguments -Wformat-security: NO
00:08:14.235 Compiler for C supports arguments -Wmissing-declarations: YES
00:08:14.235 Compiler for C supports arguments -Wmissing-prototypes: YES
00:08:14.235 Compiler for C supports arguments -Wnested-externs: YES
00:08:14.235 Compiler for C supports arguments -Wold-style-definition: YES
00:08:14.235 Compiler for C supports arguments -Wpointer-arith: YES
00:08:14.235 Compiler for C supports arguments -Wsign-compare: YES
00:08:14.235 Compiler for C supports arguments -Wstrict-prototypes: YES
00:08:14.235 Compiler for C supports arguments -Wundef: YES
00:08:14.235 Compiler for C supports arguments -Wwrite-strings: YES
00:08:14.235 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:08:14.235 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:08:14.235 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:08:14.235 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:08:14.235 Program objdump found: YES (/usr/bin/objdump)
00:08:14.235 Compiler for C supports arguments -mavx512f: YES
00:08:14.235 Checking if "AVX512 checking" compiles: YES
00:08:14.235 Fetching value of define "__SSE4_2__" : 1
00:08:14.235 Fetching value of define "__AES__" : 1
00:08:14.235 Fetching value of define "__AVX__" : 1
00:08:14.235 Fetching value of define "__AVX2__" : 1
00:08:14.235 Fetching value of define "__AVX512BW__" : 1
00:08:14.235 Fetching value of define "__AVX512CD__" : 1
00:08:14.235 Fetching value of define "__AVX512DQ__" : 1
00:08:14.235 Fetching value of define "__AVX512F__" : 1
00:08:14.235 Fetching value of define "__AVX512VL__" : 1
00:08:14.235 Fetching value of define "__PCLMUL__" : 1
00:08:14.235 Fetching value of define "__RDRND__" : 1
00:08:14.235 Fetching value of define "__RDSEED__" : 1
00:08:14.235 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:08:14.235 Fetching value of define "__znver1__" : (undefined)
00:08:14.235 Fetching value of define "__znver2__" : (undefined)
00:08:14.235 Fetching value of define "__znver3__" : (undefined)
00:08:14.235 Fetching value of define "__znver4__" : (undefined)
00:08:14.235 Library asan found: YES
00:08:14.235 Compiler for C supports arguments -Wno-format-truncation: YES
00:08:14.235 Message: lib/log: Defining dependency "log"
00:08:14.235 Message: lib/kvargs: Defining dependency "kvargs"
00:08:14.235 Message: lib/telemetry: Defining dependency "telemetry"
00:08:14.235 Library rt found: YES
00:08:14.235 Checking for function "getentropy" : NO
00:08:14.235 Message: lib/eal: Defining dependency "eal"
00:08:14.235 Message: lib/ring: Defining dependency "ring"
00:08:14.235 Message: lib/rcu: Defining dependency "rcu"
00:08:14.235 Message: lib/mempool: Defining dependency "mempool"
00:08:14.235 Message: lib/mbuf: Defining dependency "mbuf"
00:08:14.235 Fetching value of define "__PCLMUL__" : 1 (cached)
00:08:14.235 Fetching value of define "__AVX512F__" : 1 (cached)
00:08:14.235 Fetching value of define "__AVX512BW__" : 1 (cached)
00:08:14.235 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:08:14.235 Fetching value of define "__AVX512VL__" : 1 (cached)
00:08:14.235 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:08:14.235 Compiler for C supports arguments -mpclmul: YES
00:08:14.235 Compiler for C supports arguments -maes: YES
00:08:14.235 Compiler for C supports arguments -mavx512f: YES (cached)
00:08:14.235 Compiler for C supports arguments -mavx512bw: YES
00:08:14.235 Compiler for C supports arguments -mavx512dq: YES
00:08:14.236 Compiler for C supports arguments -mavx512vl: YES
00:08:14.236 Compiler for C supports arguments -mvpclmulqdq: YES
00:08:14.236 Compiler for C supports arguments -mavx2: YES
00:08:14.236 Compiler for C supports arguments -mavx: YES
00:08:14.236 Message: lib/net: Defining dependency "net"
00:08:14.236 Message: lib/meter: Defining dependency "meter"
00:08:14.236 Message: lib/ethdev: Defining dependency "ethdev"
00:08:14.236 Message: lib/pci: Defining dependency "pci"
00:08:14.236 Message: lib/cmdline: Defining dependency "cmdline"
00:08:14.236 Message: lib/hash: Defining dependency "hash"
00:08:14.236 Message: lib/timer: Defining dependency "timer"
00:08:14.236 Message: lib/compressdev: Defining dependency "compressdev"
00:08:14.236 Message: lib/cryptodev: Defining dependency "cryptodev"
00:08:14.236 Message: lib/dmadev: Defining dependency "dmadev"
00:08:14.236 Compiler for C supports arguments -Wno-cast-qual: YES
00:08:14.236 Message: lib/power: Defining dependency "power"
00:08:14.236 Message: lib/reorder: Defining dependency "reorder"
00:08:14.236 Message: lib/security: Defining dependency "security"
00:08:14.236 Has header "linux/userfaultfd.h" : YES
00:08:14.236 Has header "linux/vduse.h" : YES
00:08:14.236 Message: lib/vhost: Defining dependency "vhost"
00:08:14.236 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:08:14.236 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:08:14.236 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:08:14.236 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:08:14.236 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:08:14.236 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:08:14.236 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:08:14.236 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:08:14.236 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:08:14.236 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:08:14.236 Program doxygen found: YES (/usr/local/bin/doxygen)
00:08:14.236 Configuring doxy-api-html.conf using configuration
00:08:14.236 Configuring doxy-api-man.conf using configuration
00:08:14.236 Program mandb found: YES (/usr/bin/mandb)
00:08:14.236 Program sphinx-build found: NO
00:08:14.236 Configuring rte_build_config.h using configuration
00:08:14.236 Message:
00:08:14.236 =================
00:08:14.236 Applications Enabled
00:08:14.236 =================
00:08:14.236
00:08:14.236 apps:
00:08:14.236
00:08:14.236
00:08:14.236 Message:
00:08:14.236 =================
00:08:14.236 Libraries Enabled
00:08:14.236 =================
00:08:14.236
00:08:14.236 libs:
00:08:14.236 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:08:14.236 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:08:14.236 cryptodev, dmadev, power, reorder, security, vhost,
00:08:14.236
00:08:14.236 Message:
00:08:14.236 ===============
00:08:14.236 Drivers Enabled
00:08:14.236 ===============
00:08:14.236
00:08:14.236 common:
00:08:14.236
00:08:14.236 bus:
00:08:14.236 pci, vdev,
00:08:14.236 mempool:
00:08:14.236 ring,
00:08:14.236 dma:
00:08:14.236
00:08:14.236 net:
00:08:14.236
00:08:14.236 crypto:
00:08:14.236
00:08:14.236 compress:
00:08:14.236
00:08:14.236 vdpa:
00:08:14.236
00:08:14.236
00:08:14.236 Message:
00:08:14.236 =================
00:08:14.236 Content Skipped
00:08:14.236 =================
00:08:14.236
00:08:14.236 apps:
00:08:14.236 dumpcap: explicitly disabled via build config
00:08:14.236 graph: explicitly disabled via build config
00:08:14.236 pdump: explicitly disabled via build config
00:08:14.236 proc-info: explicitly disabled via build config
00:08:14.236 test-acl: explicitly disabled via build config
00:08:14.236 test-bbdev: explicitly disabled via build config
00:08:14.236 test-cmdline: explicitly disabled via build config
00:08:14.236 test-compress-perf: explicitly disabled via build config
00:08:14.236 test-crypto-perf: explicitly disabled via build config
00:08:14.236 test-dma-perf: explicitly disabled via build config
00:08:14.236 test-eventdev: explicitly disabled via build config
00:08:14.236 test-fib: explicitly disabled via build config
00:08:14.236 test-flow-perf: explicitly disabled via build config
00:08:14.236 test-gpudev: explicitly disabled via build config
00:08:14.236 test-mldev: explicitly disabled via build config
00:08:14.236 test-pipeline: explicitly disabled via build config
00:08:14.236 test-pmd: explicitly disabled via build config
00:08:14.236 test-regex: explicitly disabled via build config
00:08:14.236 test-sad: explicitly disabled via build config
00:08:14.236 test-security-perf: explicitly disabled via build config
00:08:14.236
00:08:14.236 libs:
00:08:14.236 argparse: explicitly disabled via build config
00:08:14.236 metrics: explicitly disabled via build config
00:08:14.236 acl: explicitly disabled via build config
00:08:14.236 bbdev: explicitly disabled via build config
00:08:14.236 bitratestats: explicitly disabled via build config
00:08:14.236 bpf: explicitly disabled via build config
00:08:14.236 cfgfile: explicitly disabled via build config
00:08:14.236 distributor: explicitly disabled via build config
00:08:14.236 efd: explicitly disabled via build config
00:08:14.236 eventdev: explicitly disabled via build config
00:08:14.236 dispatcher: explicitly disabled via build config
00:08:14.236 gpudev: explicitly disabled via build config
00:08:14.236 gro: explicitly disabled via build config
00:08:14.236 gso: explicitly disabled via build config
00:08:14.236 ip_frag: explicitly disabled via build config
00:08:14.236 jobstats: explicitly disabled via build config
00:08:14.236 latencystats: explicitly disabled via build config
00:08:14.236 lpm: explicitly disabled via build config
00:08:14.236 member: explicitly disabled via build config
00:08:14.236 pcapng: explicitly disabled via build config
00:08:14.236 rawdev: explicitly disabled via build config
00:08:14.236 regexdev: explicitly disabled via build config
00:08:14.236 mldev: explicitly disabled via build config
00:08:14.236 rib: explicitly disabled via build config
00:08:14.236 sched: explicitly disabled via build config
00:08:14.236 stack: explicitly disabled via build config
00:08:14.236 ipsec: explicitly disabled via build config
00:08:14.236 pdcp: explicitly disabled via build config
00:08:14.236 fib: explicitly disabled via build config
00:08:14.236 port: explicitly disabled via build config
00:08:14.236 pdump: explicitly disabled via build config
00:08:14.236 table: explicitly disabled via build config
00:08:14.236 pipeline: explicitly disabled via build config
00:08:14.236 graph: explicitly disabled via build config
00:08:14.236 node: explicitly disabled via build config
00:08:14.236
00:08:14.236 drivers:
00:08:14.236 common/cpt: not in enabled drivers build config
00:08:14.236 common/dpaax: not in enabled drivers build config
00:08:14.236 common/iavf: not in enabled drivers build config
00:08:14.236 common/idpf: not in enabled drivers build config
00:08:14.236 common/ionic: not in enabled drivers build config
00:08:14.236 common/mvep: not in enabled drivers build config
00:08:14.236 common/octeontx: not in enabled drivers build config
00:08:14.236 bus/auxiliary: not in enabled drivers build config
00:08:14.236 bus/cdx: not in enabled drivers build config
00:08:14.236 bus/dpaa: not in enabled drivers build config
00:08:14.236 bus/fslmc: not in enabled drivers build config
00:08:14.236 bus/ifpga: not in enabled drivers build config
00:08:14.236 bus/platform: not in enabled drivers build config
00:08:14.236 bus/uacce: not in enabled drivers build config
00:08:14.236 bus/vmbus: not in enabled drivers build config
00:08:14.236 common/cnxk: not in enabled drivers build config
00:08:14.236 common/mlx5: not in enabled drivers build config
00:08:14.236 common/nfp: not in enabled drivers build config
00:08:14.236 common/nitrox: not in enabled drivers build config
00:08:14.236 common/qat: not in enabled drivers build config
00:08:14.236 common/sfc_efx: not in enabled drivers build config
00:08:14.236 mempool/bucket: not in enabled drivers build config
00:08:14.236 mempool/cnxk: not in enabled drivers build config
00:08:14.236 mempool/dpaa: not in enabled drivers build config
00:08:14.236 mempool/dpaa2: not in enabled drivers build config
00:08:14.236 mempool/octeontx: not in enabled drivers build config
00:08:14.236 mempool/stack: not in enabled drivers build config
00:08:14.236 dma/cnxk: not in enabled drivers build config
00:08:14.236 dma/dpaa: not in enabled drivers build config
00:08:14.236 dma/dpaa2: not in enabled drivers build config
00:08:14.236 dma/hisilicon: not in enabled drivers build config
00:08:14.236 dma/idxd: not in enabled drivers build config
00:08:14.236 dma/ioat: not in enabled drivers build config
00:08:14.236 dma/skeleton: not in enabled drivers build config
00:08:14.236 net/af_packet: not in enabled drivers build config
00:08:14.236 net/af_xdp: not in enabled drivers build config
00:08:14.236 net/ark: not in enabled drivers build config
00:08:14.236 net/atlantic: not in enabled drivers build config
00:08:14.236 net/avp: not in enabled drivers build config
00:08:14.236 net/axgbe: not in enabled drivers build config
00:08:14.236 net/bnx2x: not in enabled drivers build config
00:08:14.236 net/bnxt: not in enabled drivers build config
00:08:14.236 net/bonding: not in enabled drivers build config
00:08:14.236 net/cnxk: not in enabled drivers build config
00:08:14.236 net/cpfl: not in enabled drivers build config
00:08:14.236 net/cxgbe: not in enabled drivers build config
00:08:14.236 net/dpaa: not in enabled drivers build config
00:08:14.236 net/dpaa2: not in enabled drivers build config
00:08:14.236 net/e1000: not in enabled drivers build config
00:08:14.236 net/ena: not in enabled drivers build config
00:08:14.236 net/enetc: not in enabled drivers build config
00:08:14.236 net/enetfec: not in enabled drivers build config
00:08:14.236 net/enic: not in enabled drivers build config
00:08:14.237 net/failsafe: not in enabled drivers build config
00:08:14.237 net/fm10k: not in enabled drivers build config
00:08:14.237 net/gve: not in enabled drivers build config
00:08:14.237 net/hinic: not in enabled drivers build config
00:08:14.237 net/hns3: not in enabled drivers build config
00:08:14.237 net/i40e: not in enabled drivers build config
00:08:14.237 net/iavf: not in enabled drivers build config
00:08:14.237 net/ice: not in enabled drivers build config
00:08:14.237 net/idpf: not in enabled drivers build config
00:08:14.237 net/igc: not in enabled drivers build config
00:08:14.237 net/ionic: not in enabled drivers build config
00:08:14.237 net/ipn3ke: not in enabled drivers build config
00:08:14.237 net/ixgbe: not in enabled drivers build config
00:08:14.237 net/mana: not in enabled drivers build config
00:08:14.237 net/memif: not in enabled drivers build config
00:08:14.237 net/mlx4: not in enabled drivers build config
00:08:14.237 net/mlx5: not in enabled drivers build config
00:08:14.237 net/mvneta: not in enabled drivers build config
00:08:14.237 net/mvpp2: not in enabled drivers build config
00:08:14.237 net/netvsc: not in enabled drivers build config
00:08:14.237 net/nfb: not in enabled drivers build config
00:08:14.237 net/nfp: not in enabled drivers build config
00:08:14.237 net/ngbe: not in enabled drivers build config
00:08:14.237 net/null: not in enabled drivers build config
00:08:14.237 net/octeontx: not in enabled drivers build config
00:08:14.237 net/octeon_ep: not in enabled drivers build config
00:08:14.237 net/pcap: not in enabled drivers build config
00:08:14.237 net/pfe: not in enabled drivers build config
00:08:14.237 net/qede: not in enabled drivers build config
00:08:14.237 net/ring: not in enabled drivers build config
00:08:14.237 net/sfc: not in enabled drivers build config
00:08:14.237 net/softnic: not in enabled drivers build config
00:08:14.237 net/tap: not in enabled drivers build config
00:08:14.237 net/thunderx: not in enabled drivers build config
00:08:14.237 net/txgbe: not in enabled drivers build config
00:08:14.237 net/vdev_netvsc: not in enabled drivers build config
00:08:14.237 net/vhost: not in enabled drivers build config
00:08:14.237 net/virtio: not in enabled drivers build config
00:08:14.237 net/vmxnet3: not in enabled drivers build config
00:08:14.237 raw/*: missing internal dependency, "rawdev"
00:08:14.237 crypto/armv8: not in enabled drivers build config
00:08:14.237 crypto/bcmfs: not in enabled drivers build config
00:08:14.237 crypto/caam_jr: not in enabled drivers build config
00:08:14.237 crypto/ccp: not in enabled drivers build config
00:08:14.237 crypto/cnxk: not in enabled drivers build config
00:08:14.237 crypto/dpaa_sec: not in enabled drivers build config
00:08:14.237 crypto/dpaa2_sec: not in enabled drivers build config
00:08:14.237 crypto/ipsec_mb: not in enabled drivers build config
00:08:14.237 crypto/mlx5: not in enabled drivers build config
00:08:14.237 crypto/mvsam: not in enabled drivers build config
00:08:14.237 crypto/nitrox: not in enabled drivers build config
00:08:14.237 crypto/null: not in enabled drivers build config
00:08:14.237 crypto/octeontx: not in enabled drivers build config
00:08:14.237 crypto/openssl: not in enabled drivers build config
00:08:14.237 crypto/scheduler: not in enabled drivers build config
00:08:14.237 crypto/uadk: not in enabled drivers build config
00:08:14.237 crypto/virtio: not in enabled drivers build config
00:08:14.237 compress/isal: not in enabled drivers build config
00:08:14.237 compress/mlx5: not in enabled drivers build config
00:08:14.237 compress/nitrox: not in enabled drivers build config
00:08:14.237 compress/octeontx: not in enabled drivers build config
00:08:14.237 compress/zlib: not in enabled drivers build config
00:08:14.237 regex/*: missing internal dependency, "regexdev"
00:08:14.237 ml/*: missing internal dependency, "mldev"
00:08:14.237 vdpa/ifc: not in enabled drivers build config
00:08:14.237 vdpa/mlx5: not in enabled drivers build config
00:08:14.237 vdpa/nfp: not in enabled drivers build config
00:08:14.237 vdpa/sfc: not in enabled drivers build config
00:08:14.237 event/*: missing internal dependency, "eventdev"
00:08:14.237 baseband/*: missing internal dependency, "bbdev"
00:08:14.237 gpu/*: missing internal dependency, "gpudev"
00:08:14.237
00:08:14.237
00:08:14.495 Build targets in project: 85
00:08:14.495
00:08:14.495 DPDK 24.03.0
00:08:14.495
00:08:14.495 User defined options
00:08:14.495 buildtype : debug
00:08:14.495 default_library : shared
00:08:14.495 libdir : lib
00:08:14.495 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:08:14.495 b_sanitize : address
00:08:14.495 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:08:14.495 c_link_args :
00:08:14.495 cpu_instruction_set: native
00:08:14.495 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:08:14.495 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:08:14.495 enable_docs : false
00:08:14.495 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:08:14.495 enable_kmods : false
00:08:14.495 max_lcores : 128
00:08:14.495 tests : false
00:08:14.495
00:08:14.495 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:08:15.429 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:08:15.429 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:08:15.429 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:08:15.429 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:08:15.429 [4/268] Linking static target lib/librte_log.a
00:08:15.429 [5/268] Linking static target lib/librte_kvargs.a
00:08:15.429 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:08:15.994 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:08:15.994 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:08:15.994 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:08:15.994 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:08:15.994 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:08:15.994 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:08:15.994 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:08:15.994 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:08:15.994 [15/268] Linking static target lib/librte_telemetry.a
00:08:15.994 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:08:15.994 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:08:15.994 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:08:16.251 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:08:16.508 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:08:16.508 [21/268] Linking target lib/librte_log.so.24.1
00:08:16.508 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:08:16.508 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:08:16.508 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:08:16.765 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:08:16.765 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:08:16.765 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:08:16.765 [28/268] Linking target lib/librte_kvargs.so.24.1
00:08:16.765 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:08:16.765 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:08:17.022 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:08:17.022 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:08:17.022 [33/268] Linking target lib/librte_telemetry.so.24.1
00:08:17.022 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:08:17.022 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:08:17.281 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:08:17.281 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:08:17.281 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:08:17.281 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:08:17.281 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:08:17.281 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:08:17.539 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:08:17.539 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:08:17.539 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:08:17.539 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:08:17.539 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:08:17.796 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:08:17.796 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:08:17.796 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:08:18.054 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:08:18.055 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:08:18.055 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:08:18.055 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:08:18.312 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:08:18.312 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:08:18.312 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:08:18.312 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:08:18.312 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:08:18.569 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:08:18.569 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:08:18.569 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:08:18.569 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:08:18.569 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:08:18.828 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:08:18.828 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:08:19.086 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:08:19.086 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:08:19.086 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:08:19.086 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:08:19.347 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:08:19.347 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:08:19.347 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:08:19.347 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:08:19.347 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:08:19.347 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:08:19.609 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:08:19.609 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:08:19.609 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:08:19.866 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:08:19.866 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:08:19.866 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:08:19.866 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:08:19.866 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:08:20.125 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:08:20.125 [85/268] Linking static target lib/librte_ring.a
00:08:20.125 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:08:20.125 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:08:20.383 [88/268] Linking static target lib/librte_eal.a
00:08:20.383 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:08:20.383 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:08:20.383 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:08:20.383 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:08:20.383 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:08:20.383 [94/268] Linking static target lib/librte_rcu.a
00:08:20.642 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:08:20.642 [96/268] Linking static target lib/librte_mempool.a
00:08:20.642 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:08:20.642 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:08:20.900 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:08:20.900 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:08:20.900 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:08:21.159 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:08:21.159 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:08:21.159 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:08:21.159 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:08:21.159 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:08:21.159 [107/268] Linking static target lib/librte_net.a
00:08:21.417 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:08:21.417 [109/268] Linking static target lib/librte_meter.a
00:08:21.417 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:08:21.417 [111/268] Linking static target lib/librte_mbuf.a
00:08:21.674 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:08:21.674 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:08:21.674 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:08:21.674 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:08:21.674 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:08:21.932 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:08:21.932 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:08:22.502 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:08:22.502 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:08:22.502 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:08:22.502 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:08:23.070 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:08:23.070 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:08:23.070 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:08:23.070 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:08:23.070 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:08:23.070 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:08:23.070 [129/268] Linking static target lib/librte_pci.a
00:08:23.070 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:08:23.070 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:08:23.070 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:08:23.329 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:08:23.329 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:08:23.329 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:08:23.329 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:08:23.329 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:08:23.329 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:08:23.586 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:08:23.586 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:08:23.586 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:08:23.586 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:08:23.586 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:08:23.843 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:08:23.843 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:08:23.843 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:08:23.843 [147/268] Linking static target lib/librte_cmdline.a
00:08:23.843 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:08:24.101 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:08:24.101 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:08:24.101 [151/268] Linking static target lib/librte_timer.a
00:08:24.360 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:08:24.360 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:08:24.360 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:08:24.360 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:08:24.618 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:08:24.618 [157/268] Linking static target lib/librte_ethdev.a
00:08:24.618 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:08:24.876 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:08:24.876 [160/268] Compiling C object
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:24.876 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:24.876 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:25.133 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:25.133 [164/268] Linking static target lib/librte_compressdev.a 00:08:25.133 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:25.133 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:25.133 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:25.133 [168/268] Linking static target lib/librte_dmadev.a 00:08:25.133 [169/268] Linking static target lib/librte_hash.a 00:08:25.392 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:25.392 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:25.392 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:25.392 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.650 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:25.909 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:25.909 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:25.909 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:26.167 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.167 [179/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.167 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:26.167 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:26.167 [182/268] Linking 
static target lib/librte_cryptodev.a 00:08:26.167 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:26.425 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.425 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:26.683 [186/268] Linking static target lib/librte_power.a 00:08:26.683 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:26.683 [188/268] Linking static target lib/librte_reorder.a 00:08:26.683 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:26.683 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:26.941 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:26.941 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:26.941 [193/268] Linking static target lib/librte_security.a 00:08:27.200 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.767 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.767 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.767 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:27.767 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:27.767 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:28.025 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:28.284 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:28.284 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:28.543 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:28.543 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:08:28.543 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:28.543 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:28.543 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:28.801 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:28.801 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:28.801 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:28.801 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:29.059 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:29.059 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:29.059 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:29.059 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:29.059 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:29.318 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:29.318 [218/268] Linking static target drivers/librte_bus_vdev.a 00:08:29.318 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:29.318 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:29.318 [221/268] Linking static target drivers/librte_bus_pci.a 00:08:29.318 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:29.318 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:29.576 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:29.576 [225/268] Linking static target drivers/librte_mempool_ring.a 
00:08:29.576 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.576 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.949 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.949 [229/268] Linking target lib/librte_eal.so.24.1 00:08:31.207 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:31.207 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:31.207 [232/268] Linking target lib/librte_meter.so.24.1 00:08:31.207 [233/268] Linking target lib/librte_pci.so.24.1 00:08:31.207 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:31.207 [235/268] Linking target lib/librte_dmadev.so.24.1 00:08:31.207 [236/268] Linking target lib/librte_ring.so.24.1 00:08:31.207 [237/268] Linking target lib/librte_timer.so.24.1 00:08:31.465 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:31.465 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:31.465 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:31.465 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:31.465 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:31.465 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:31.465 [244/268] Linking target lib/librte_rcu.so.24.1 00:08:31.465 [245/268] Linking target lib/librte_mempool.so.24.1 00:08:31.739 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:31.739 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:31.739 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:31.739 [249/268] Linking target 
lib/librte_mbuf.so.24.1 00:08:32.024 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:32.024 [251/268] Linking target lib/librte_reorder.so.24.1 00:08:32.024 [252/268] Linking target lib/librte_net.so.24.1 00:08:32.024 [253/268] Linking target lib/librte_compressdev.so.24.1 00:08:32.024 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:08:32.024 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:32.024 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:32.024 [257/268] Linking target lib/librte_hash.so.24.1 00:08:32.024 [258/268] Linking target lib/librte_cmdline.so.24.1 00:08:32.024 [259/268] Linking target lib/librte_security.so.24.1 00:08:32.283 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:33.216 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.216 [262/268] Linking target lib/librte_ethdev.so.24.1 00:08:33.474 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:33.474 [264/268] Linking target lib/librte_power.so.24.1 00:08:35.373 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:35.373 [266/268] Linking static target lib/librte_vhost.a 00:08:37.900 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.900 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:37.900 INFO: autodetecting backend as ninja 00:08:37.900 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:04.433 CC lib/ut/ut.o 00:09:04.433 CC lib/ut_mock/mock.o 00:09:04.433 CC lib/log/log.o 00:09:04.433 CC lib/log/log_flags.o 00:09:04.433 CC lib/log/log_deprecated.o 00:09:04.433 LIB libspdk_ut.a 00:09:04.433 LIB libspdk_ut_mock.a 00:09:04.433 LIB libspdk_log.a 
00:09:04.433 SO libspdk_ut.so.2.0 00:09:04.433 SO libspdk_ut_mock.so.6.0 00:09:04.433 SO libspdk_log.so.7.1 00:09:04.433 SYMLINK libspdk_ut.so 00:09:04.433 SYMLINK libspdk_ut_mock.so 00:09:04.433 SYMLINK libspdk_log.so 00:09:04.433 CXX lib/trace_parser/trace.o 00:09:04.433 CC lib/util/base64.o 00:09:04.433 CC lib/util/bit_array.o 00:09:04.433 CC lib/util/cpuset.o 00:09:04.433 CC lib/util/crc16.o 00:09:04.433 CC lib/util/crc32.o 00:09:04.433 CC lib/util/crc32c.o 00:09:04.433 CC lib/ioat/ioat.o 00:09:04.433 CC lib/dma/dma.o 00:09:04.433 CC lib/vfio_user/host/vfio_user_pci.o 00:09:04.433 CC lib/vfio_user/host/vfio_user.o 00:09:04.433 CC lib/util/crc32_ieee.o 00:09:04.433 CC lib/util/crc64.o 00:09:04.433 CC lib/util/dif.o 00:09:04.433 CC lib/util/fd.o 00:09:04.433 CC lib/util/fd_group.o 00:09:04.433 LIB libspdk_dma.a 00:09:04.433 LIB libspdk_ioat.a 00:09:04.433 SO libspdk_dma.so.5.0 00:09:04.433 CC lib/util/file.o 00:09:04.433 SO libspdk_ioat.so.7.0 00:09:04.433 CC lib/util/hexlify.o 00:09:04.433 CC lib/util/iov.o 00:09:04.433 SYMLINK libspdk_dma.so 00:09:04.433 CC lib/util/math.o 00:09:04.433 SYMLINK libspdk_ioat.so 00:09:04.433 CC lib/util/net.o 00:09:04.433 CC lib/util/pipe.o 00:09:04.433 CC lib/util/strerror_tls.o 00:09:04.433 CC lib/util/string.o 00:09:04.433 CC lib/util/uuid.o 00:09:04.433 LIB libspdk_vfio_user.a 00:09:04.433 CC lib/util/xor.o 00:09:04.433 SO libspdk_vfio_user.so.5.0 00:09:04.433 CC lib/util/zipf.o 00:09:04.433 CC lib/util/md5.o 00:09:04.433 SYMLINK libspdk_vfio_user.so 00:09:04.433 LIB libspdk_util.a 00:09:04.433 SO libspdk_util.so.10.1 00:09:04.433 LIB libspdk_trace_parser.a 00:09:04.433 SO libspdk_trace_parser.so.6.0 00:09:04.433 SYMLINK libspdk_util.so 00:09:04.433 SYMLINK libspdk_trace_parser.so 00:09:04.433 CC lib/conf/conf.o 00:09:04.433 CC lib/idxd/idxd.o 00:09:04.433 CC lib/vmd/vmd.o 00:09:04.433 CC lib/idxd/idxd_user.o 00:09:04.433 CC lib/rdma_utils/rdma_utils.o 00:09:04.433 CC lib/vmd/led.o 00:09:04.433 CC lib/idxd/idxd_kernel.o 
00:09:04.433 CC lib/env_dpdk/env.o 00:09:04.433 CC lib/env_dpdk/memory.o 00:09:04.433 CC lib/json/json_parse.o 00:09:04.433 CC lib/json/json_util.o 00:09:04.433 CC lib/json/json_write.o 00:09:04.433 LIB libspdk_conf.a 00:09:04.433 SO libspdk_conf.so.6.0 00:09:04.433 CC lib/env_dpdk/pci.o 00:09:04.433 CC lib/env_dpdk/init.o 00:09:04.433 SYMLINK libspdk_conf.so 00:09:04.433 LIB libspdk_rdma_utils.a 00:09:04.433 CC lib/env_dpdk/threads.o 00:09:04.433 SO libspdk_rdma_utils.so.1.0 00:09:04.433 SYMLINK libspdk_rdma_utils.so 00:09:04.433 CC lib/env_dpdk/pci_ioat.o 00:09:04.433 CC lib/env_dpdk/pci_virtio.o 00:09:04.433 CC lib/env_dpdk/pci_vmd.o 00:09:04.433 LIB libspdk_json.a 00:09:04.433 SO libspdk_json.so.6.0 00:09:04.433 CC lib/env_dpdk/pci_idxd.o 00:09:04.433 SYMLINK libspdk_json.so 00:09:04.433 CC lib/env_dpdk/pci_event.o 00:09:04.433 CC lib/env_dpdk/sigbus_handler.o 00:09:04.433 CC lib/env_dpdk/pci_dpdk.o 00:09:04.433 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:04.433 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:04.433 CC lib/rdma_provider/common.o 00:09:04.691 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:04.691 LIB libspdk_idxd.a 00:09:04.691 LIB libspdk_vmd.a 00:09:04.691 SO libspdk_idxd.so.12.1 00:09:04.691 SO libspdk_vmd.so.6.0 00:09:04.949 SYMLINK libspdk_idxd.so 00:09:04.949 CC lib/jsonrpc/jsonrpc_server.o 00:09:04.949 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:04.949 CC lib/jsonrpc/jsonrpc_client.o 00:09:04.949 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:04.949 SYMLINK libspdk_vmd.so 00:09:04.949 LIB libspdk_rdma_provider.a 00:09:04.949 SO libspdk_rdma_provider.so.7.0 00:09:05.207 SYMLINK libspdk_rdma_provider.so 00:09:05.207 LIB libspdk_jsonrpc.a 00:09:05.207 SO libspdk_jsonrpc.so.6.0 00:09:05.465 SYMLINK libspdk_jsonrpc.so 00:09:05.731 CC lib/rpc/rpc.o 00:09:05.731 LIB libspdk_env_dpdk.a 00:09:05.731 SO libspdk_env_dpdk.so.15.1 00:09:05.996 LIB libspdk_rpc.a 00:09:05.996 SYMLINK libspdk_env_dpdk.so 00:09:05.996 SO libspdk_rpc.so.6.0 00:09:06.255 SYMLINK libspdk_rpc.so 
00:09:06.255 CC lib/notify/notify.o 00:09:06.255 CC lib/trace/trace.o 00:09:06.255 CC lib/keyring/keyring.o 00:09:06.255 CC lib/trace/trace_flags.o 00:09:06.255 CC lib/notify/notify_rpc.o 00:09:06.255 CC lib/keyring/keyring_rpc.o 00:09:06.255 CC lib/trace/trace_rpc.o 00:09:06.513 LIB libspdk_notify.a 00:09:06.797 SO libspdk_notify.so.6.0 00:09:06.797 SYMLINK libspdk_notify.so 00:09:06.797 LIB libspdk_keyring.a 00:09:06.797 SO libspdk_keyring.so.2.0 00:09:06.797 LIB libspdk_trace.a 00:09:07.057 SO libspdk_trace.so.11.0 00:09:07.057 SYMLINK libspdk_keyring.so 00:09:07.057 SYMLINK libspdk_trace.so 00:09:07.315 CC lib/thread/thread.o 00:09:07.315 CC lib/thread/iobuf.o 00:09:07.315 CC lib/sock/sock.o 00:09:07.315 CC lib/sock/sock_rpc.o 00:09:07.882 LIB libspdk_sock.a 00:09:07.882 SO libspdk_sock.so.10.0 00:09:08.141 SYMLINK libspdk_sock.so 00:09:08.401 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:08.401 CC lib/nvme/nvme_ctrlr.o 00:09:08.401 CC lib/nvme/nvme_fabric.o 00:09:08.401 CC lib/nvme/nvme_ns_cmd.o 00:09:08.401 CC lib/nvme/nvme_pcie.o 00:09:08.401 CC lib/nvme/nvme_qpair.o 00:09:08.401 CC lib/nvme/nvme.o 00:09:08.401 CC lib/nvme/nvme_pcie_common.o 00:09:08.401 CC lib/nvme/nvme_ns.o 00:09:09.340 CC lib/nvme/nvme_quirks.o 00:09:09.598 CC lib/nvme/nvme_transport.o 00:09:09.598 CC lib/nvme/nvme_discovery.o 00:09:09.598 LIB libspdk_thread.a 00:09:09.598 SO libspdk_thread.so.11.0 00:09:09.856 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:09.856 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:09.856 SYMLINK libspdk_thread.so 00:09:09.856 CC lib/nvme/nvme_tcp.o 00:09:09.856 CC lib/nvme/nvme_opal.o 00:09:10.114 CC lib/nvme/nvme_io_msg.o 00:09:10.372 CC lib/nvme/nvme_poll_group.o 00:09:10.372 CC lib/accel/accel.o 00:09:10.372 CC lib/accel/accel_rpc.o 00:09:10.372 CC lib/accel/accel_sw.o 00:09:10.372 CC lib/nvme/nvme_zns.o 00:09:10.630 CC lib/nvme/nvme_stubs.o 00:09:10.888 CC lib/blob/blobstore.o 00:09:11.146 CC lib/blob/request.o 00:09:11.146 CC lib/blob/zeroes.o 00:09:11.146 CC 
lib/init/json_config.o 00:09:11.404 CC lib/init/subsystem.o 00:09:11.404 CC lib/virtio/virtio.o 00:09:11.404 CC lib/init/subsystem_rpc.o 00:09:11.404 CC lib/init/rpc.o 00:09:11.663 CC lib/blob/blob_bs_dev.o 00:09:11.663 CC lib/virtio/virtio_vhost_user.o 00:09:11.663 CC lib/virtio/virtio_vfio_user.o 00:09:11.921 LIB libspdk_init.a 00:09:11.921 CC lib/fsdev/fsdev.o 00:09:11.921 CC lib/virtio/virtio_pci.o 00:09:11.921 SO libspdk_init.so.6.0 00:09:11.921 SYMLINK libspdk_init.so 00:09:11.921 CC lib/fsdev/fsdev_io.o 00:09:12.179 LIB libspdk_accel.a 00:09:12.179 CC lib/nvme/nvme_auth.o 00:09:12.179 SO libspdk_accel.so.16.0 00:09:12.179 SYMLINK libspdk_accel.so 00:09:12.179 CC lib/fsdev/fsdev_rpc.o 00:09:12.437 LIB libspdk_virtio.a 00:09:12.437 CC lib/event/app.o 00:09:12.437 SO libspdk_virtio.so.7.0 00:09:12.437 CC lib/event/reactor.o 00:09:12.437 CC lib/bdev/bdev.o 00:09:12.695 SYMLINK libspdk_virtio.so 00:09:12.695 CC lib/event/log_rpc.o 00:09:12.695 CC lib/event/app_rpc.o 00:09:12.953 CC lib/event/scheduler_static.o 00:09:12.953 CC lib/nvme/nvme_cuse.o 00:09:13.211 CC lib/bdev/bdev_rpc.o 00:09:13.211 CC lib/nvme/nvme_rdma.o 00:09:13.211 CC lib/bdev/bdev_zone.o 00:09:13.211 LIB libspdk_fsdev.a 00:09:13.211 CC lib/bdev/part.o 00:09:13.211 SO libspdk_fsdev.so.2.0 00:09:13.469 LIB libspdk_event.a 00:09:13.469 SYMLINK libspdk_fsdev.so 00:09:13.469 CC lib/bdev/scsi_nvme.o 00:09:13.469 SO libspdk_event.so.14.0 00:09:13.727 SYMLINK libspdk_event.so 00:09:13.727 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:15.103 LIB libspdk_fuse_dispatcher.a 00:09:15.103 SO libspdk_fuse_dispatcher.so.1.0 00:09:15.103 SYMLINK libspdk_fuse_dispatcher.so 00:09:16.037 LIB libspdk_nvme.a 00:09:16.037 LIB libspdk_blob.a 00:09:16.037 SO libspdk_blob.so.12.0 00:09:16.037 SO libspdk_nvme.so.15.0 00:09:16.296 SYMLINK libspdk_blob.so 00:09:16.555 CC lib/lvol/lvol.o 00:09:16.555 CC lib/blobfs/blobfs.o 00:09:16.555 CC lib/blobfs/tree.o 00:09:16.555 SYMLINK libspdk_nvme.so 00:09:17.935 LIB libspdk_bdev.a 
00:09:17.935 LIB libspdk_blobfs.a 00:09:17.935 SO libspdk_blobfs.so.11.0 00:09:17.935 SO libspdk_bdev.so.17.0 00:09:17.935 SYMLINK libspdk_blobfs.so 00:09:17.935 SYMLINK libspdk_bdev.so 00:09:18.194 CC lib/nbd/nbd.o 00:09:18.194 CC lib/ftl/ftl_core.o 00:09:18.194 CC lib/ftl/ftl_init.o 00:09:18.194 CC lib/ftl/ftl_layout.o 00:09:18.194 CC lib/ftl/ftl_debug.o 00:09:18.194 CC lib/nbd/nbd_rpc.o 00:09:18.194 CC lib/ublk/ublk.o 00:09:18.194 CC lib/scsi/dev.o 00:09:18.194 CC lib/nvmf/ctrlr.o 00:09:18.194 LIB libspdk_lvol.a 00:09:18.194 SO libspdk_lvol.so.11.0 00:09:18.453 CC lib/scsi/lun.o 00:09:18.453 SYMLINK libspdk_lvol.so 00:09:18.453 CC lib/ftl/ftl_io.o 00:09:18.453 CC lib/ftl/ftl_sb.o 00:09:18.453 CC lib/ftl/ftl_l2p.o 00:09:18.453 CC lib/ftl/ftl_l2p_flat.o 00:09:18.712 LIB libspdk_nbd.a 00:09:18.712 SO libspdk_nbd.so.7.0 00:09:18.712 CC lib/ftl/ftl_nv_cache.o 00:09:18.712 CC lib/ftl/ftl_band.o 00:09:18.970 CC lib/ublk/ublk_rpc.o 00:09:18.970 SYMLINK libspdk_nbd.so 00:09:18.970 CC lib/ftl/ftl_band_ops.o 00:09:18.970 CC lib/ftl/ftl_writer.o 00:09:18.970 CC lib/scsi/port.o 00:09:18.970 CC lib/ftl/ftl_rq.o 00:09:18.970 CC lib/ftl/ftl_reloc.o 00:09:19.229 CC lib/ftl/ftl_l2p_cache.o 00:09:19.229 LIB libspdk_ublk.a 00:09:19.229 CC lib/scsi/scsi.o 00:09:19.229 SO libspdk_ublk.so.3.0 00:09:19.229 CC lib/ftl/ftl_p2l.o 00:09:19.229 CC lib/ftl/ftl_p2l_log.o 00:09:19.487 SYMLINK libspdk_ublk.so 00:09:19.487 CC lib/ftl/mngt/ftl_mngt.o 00:09:19.487 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:19.487 CC lib/scsi/scsi_bdev.o 00:09:19.487 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:19.744 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:19.744 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:19.744 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:19.744 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:20.002 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:20.002 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:20.002 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:20.002 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:20.002 CC lib/scsi/scsi_pr.o 00:09:20.261 CC 
lib/nvmf/ctrlr_discovery.o 00:09:20.261 CC lib/scsi/scsi_rpc.o 00:09:20.261 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:20.261 CC lib/nvmf/ctrlr_bdev.o 00:09:20.261 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:20.520 CC lib/scsi/task.o 00:09:20.520 CC lib/ftl/utils/ftl_conf.o 00:09:20.520 CC lib/nvmf/subsystem.o 00:09:20.520 CC lib/nvmf/nvmf.o 00:09:20.520 CC lib/ftl/utils/ftl_md.o 00:09:20.520 CC lib/nvmf/nvmf_rpc.o 00:09:20.778 CC lib/ftl/utils/ftl_mempool.o 00:09:20.778 LIB libspdk_scsi.a 00:09:21.036 CC lib/nvmf/transport.o 00:09:21.036 SO libspdk_scsi.so.9.0 00:09:21.036 CC lib/ftl/utils/ftl_bitmap.o 00:09:21.036 SYMLINK libspdk_scsi.so 00:09:21.036 CC lib/ftl/utils/ftl_property.o 00:09:21.295 CC lib/nvmf/tcp.o 00:09:21.295 CC lib/nvmf/stubs.o 00:09:21.295 CC lib/nvmf/mdns_server.o 00:09:21.295 CC lib/nvmf/rdma.o 00:09:21.554 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:21.554 CC lib/iscsi/conn.o 00:09:21.814 CC lib/iscsi/init_grp.o 00:09:21.814 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:22.074 CC lib/nvmf/auth.o 00:09:22.074 CC lib/iscsi/iscsi.o 00:09:22.074 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:22.333 CC lib/vhost/vhost.o 00:09:22.591 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:22.591 CC lib/iscsi/param.o 00:09:22.591 CC lib/iscsi/portal_grp.o 00:09:22.872 CC lib/iscsi/tgt_node.o 00:09:22.872 CC lib/iscsi/iscsi_subsystem.o 00:09:22.872 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:22.872 CC lib/iscsi/iscsi_rpc.o 00:09:23.177 CC lib/iscsi/task.o 00:09:23.177 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:23.436 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:23.436 CC lib/vhost/vhost_rpc.o 00:09:23.694 CC lib/vhost/vhost_scsi.o 00:09:23.694 CC lib/vhost/vhost_blk.o 00:09:23.694 CC lib/vhost/rte_vhost_user.o 00:09:23.694 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:23.694 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:23.953 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:23.953 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:23.953 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:24.211 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:24.211 CC lib/ftl/base/ftl_base_dev.o 00:09:24.470 CC lib/ftl/base/ftl_base_bdev.o 00:09:24.470 CC lib/ftl/ftl_trace.o 00:09:24.730 LIB libspdk_ftl.a 00:09:24.730 LIB libspdk_iscsi.a 00:09:24.988 SO libspdk_iscsi.so.8.0 00:09:24.988 SO libspdk_ftl.so.9.0 00:09:25.246 SYMLINK libspdk_iscsi.so 00:09:25.246 LIB libspdk_nvmf.a 00:09:25.505 LIB libspdk_vhost.a 00:09:25.505 SYMLINK libspdk_ftl.so 00:09:25.505 SO libspdk_vhost.so.8.0 00:09:25.505 SO libspdk_nvmf.so.20.0 00:09:25.763 SYMLINK libspdk_vhost.so 00:09:25.763 SYMLINK libspdk_nvmf.so 00:09:26.332 CC module/env_dpdk/env_dpdk_rpc.o 00:09:26.332 CC module/accel/error/accel_error.o 00:09:26.332 CC module/accel/iaa/accel_iaa.o 00:09:26.332 CC module/blob/bdev/blob_bdev.o 00:09:26.332 CC module/accel/dsa/accel_dsa.o 00:09:26.332 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:26.332 CC module/sock/posix/posix.o 00:09:26.332 CC module/accel/ioat/accel_ioat.o 00:09:26.332 CC module/keyring/file/keyring.o 00:09:26.332 CC module/fsdev/aio/fsdev_aio.o 00:09:26.332 LIB libspdk_env_dpdk_rpc.a 00:09:26.332 SO libspdk_env_dpdk_rpc.so.6.0 00:09:26.590 CC module/keyring/file/keyring_rpc.o 00:09:26.590 SYMLINK libspdk_env_dpdk_rpc.so 00:09:26.590 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:26.590 LIB libspdk_scheduler_dynamic.a 00:09:26.590 CC module/accel/error/accel_error_rpc.o 00:09:26.590 CC module/accel/ioat/accel_ioat_rpc.o 00:09:26.590 CC module/accel/iaa/accel_iaa_rpc.o 00:09:26.590 SO libspdk_scheduler_dynamic.so.4.0 00:09:26.590 LIB libspdk_keyring_file.a 00:09:26.590 SYMLINK libspdk_scheduler_dynamic.so 00:09:26.590 LIB libspdk_blob_bdev.a 00:09:26.590 CC module/accel/dsa/accel_dsa_rpc.o 00:09:26.590 SO libspdk_keyring_file.so.2.0 00:09:26.590 SO libspdk_blob_bdev.so.12.0 00:09:26.848 LIB libspdk_accel_error.a 00:09:26.848 LIB libspdk_accel_ioat.a 00:09:26.848 LIB libspdk_accel_iaa.a 00:09:26.848 SO libspdk_accel_error.so.2.0 00:09:26.848 SYMLINK libspdk_keyring_file.so 
00:09:26.848 SO libspdk_accel_ioat.so.6.0 00:09:26.848 SO libspdk_accel_iaa.so.3.0 00:09:26.848 SYMLINK libspdk_blob_bdev.so 00:09:26.848 LIB libspdk_accel_dsa.a 00:09:26.848 SYMLINK libspdk_accel_error.so 00:09:26.848 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:26.848 SYMLINK libspdk_accel_iaa.so 00:09:26.848 CC module/fsdev/aio/linux_aio_mgr.o 00:09:26.848 SO libspdk_accel_dsa.so.5.0 00:09:26.848 SYMLINK libspdk_accel_ioat.so 00:09:26.848 CC module/keyring/linux/keyring.o 00:09:26.848 CC module/keyring/linux/keyring_rpc.o 00:09:26.848 SYMLINK libspdk_accel_dsa.so 00:09:27.164 CC module/scheduler/gscheduler/gscheduler.o 00:09:27.164 LIB libspdk_scheduler_dpdk_governor.a 00:09:27.164 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:27.164 LIB libspdk_keyring_linux.a 00:09:27.164 CC module/bdev/delay/vbdev_delay.o 00:09:27.164 SO libspdk_keyring_linux.so.1.0 00:09:27.164 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:27.164 CC module/blobfs/bdev/blobfs_bdev.o 00:09:27.164 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:27.164 CC module/bdev/error/vbdev_error.o 00:09:27.164 LIB libspdk_scheduler_gscheduler.a 00:09:27.164 LIB libspdk_fsdev_aio.a 00:09:27.164 SYMLINK libspdk_keyring_linux.so 00:09:27.164 SO libspdk_scheduler_gscheduler.so.4.0 00:09:27.423 SO libspdk_fsdev_aio.so.1.0 00:09:27.423 LIB libspdk_sock_posix.a 00:09:27.423 SYMLINK libspdk_scheduler_gscheduler.so 00:09:27.423 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:27.423 CC module/bdev/gpt/gpt.o 00:09:27.423 CC module/bdev/lvol/vbdev_lvol.o 00:09:27.423 SO libspdk_sock_posix.so.6.0 00:09:27.423 SYMLINK libspdk_fsdev_aio.so 00:09:27.423 CC module/bdev/gpt/vbdev_gpt.o 00:09:27.423 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:27.423 SYMLINK libspdk_sock_posix.so 00:09:27.423 CC module/bdev/malloc/bdev_malloc.o 00:09:27.423 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:27.682 LIB libspdk_blobfs_bdev.a 00:09:27.682 CC module/bdev/null/bdev_null.o 00:09:27.682 SO libspdk_blobfs_bdev.so.6.0 
00:09:27.682 CC module/bdev/error/vbdev_error_rpc.o 00:09:27.682 LIB libspdk_bdev_delay.a 00:09:27.682 SO libspdk_bdev_delay.so.6.0 00:09:27.682 SYMLINK libspdk_blobfs_bdev.so 00:09:27.682 CC module/bdev/null/bdev_null_rpc.o 00:09:27.682 SYMLINK libspdk_bdev_delay.so 00:09:27.683 LIB libspdk_bdev_gpt.a 00:09:27.683 SO libspdk_bdev_gpt.so.6.0 00:09:27.683 CC module/bdev/nvme/bdev_nvme.o 00:09:27.683 LIB libspdk_bdev_error.a 00:09:27.941 SO libspdk_bdev_error.so.6.0 00:09:27.941 SYMLINK libspdk_bdev_gpt.so 00:09:27.941 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:27.941 SYMLINK libspdk_bdev_error.so 00:09:27.941 CC module/bdev/passthru/vbdev_passthru.o 00:09:27.941 CC module/bdev/raid/bdev_raid.o 00:09:27.941 LIB libspdk_bdev_malloc.a 00:09:27.941 LIB libspdk_bdev_null.a 00:09:27.941 SO libspdk_bdev_malloc.so.6.0 00:09:27.941 SO libspdk_bdev_null.so.6.0 00:09:27.941 LIB libspdk_bdev_lvol.a 00:09:27.941 CC module/bdev/split/vbdev_split.o 00:09:28.200 SYMLINK libspdk_bdev_null.so 00:09:28.200 SYMLINK libspdk_bdev_malloc.so 00:09:28.200 CC module/bdev/nvme/nvme_rpc.o 00:09:28.200 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:28.200 SO libspdk_bdev_lvol.so.6.0 00:09:28.200 CC module/bdev/aio/bdev_aio.o 00:09:28.200 SYMLINK libspdk_bdev_lvol.so 00:09:28.200 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:28.200 CC module/bdev/ftl/bdev_ftl.o 00:09:28.200 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:28.458 CC module/bdev/split/vbdev_split_rpc.o 00:09:28.458 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:28.458 CC module/bdev/raid/bdev_raid_rpc.o 00:09:28.458 LIB libspdk_bdev_passthru.a 00:09:28.458 LIB libspdk_bdev_zone_block.a 00:09:28.458 SO libspdk_bdev_passthru.so.6.0 00:09:28.458 LIB libspdk_bdev_split.a 00:09:28.458 SO libspdk_bdev_zone_block.so.6.0 00:09:28.458 SO libspdk_bdev_split.so.6.0 00:09:28.458 CC module/bdev/aio/bdev_aio_rpc.o 00:09:28.716 CC module/bdev/raid/bdev_raid_sb.o 00:09:28.716 SYMLINK libspdk_bdev_zone_block.so 00:09:28.716 CC 
module/bdev/raid/raid0.o 00:09:28.716 SYMLINK libspdk_bdev_passthru.so 00:09:28.716 CC module/bdev/raid/raid1.o 00:09:28.716 LIB libspdk_bdev_ftl.a 00:09:28.716 SYMLINK libspdk_bdev_split.so 00:09:28.716 SO libspdk_bdev_ftl.so.6.0 00:09:28.716 CC module/bdev/raid/concat.o 00:09:28.716 SYMLINK libspdk_bdev_ftl.so 00:09:28.716 LIB libspdk_bdev_aio.a 00:09:28.716 CC module/bdev/nvme/bdev_mdns_client.o 00:09:28.974 SO libspdk_bdev_aio.so.6.0 00:09:28.974 CC module/bdev/raid/raid5f.o 00:09:28.974 SYMLINK libspdk_bdev_aio.so 00:09:28.974 CC module/bdev/nvme/vbdev_opal.o 00:09:28.974 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:28.974 CC module/bdev/iscsi/bdev_iscsi.o 00:09:28.974 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:28.974 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:29.233 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:29.233 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:29.233 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:29.490 LIB libspdk_bdev_iscsi.a 00:09:29.490 SO libspdk_bdev_iscsi.so.6.0 00:09:29.748 SYMLINK libspdk_bdev_iscsi.so 00:09:29.748 LIB libspdk_bdev_virtio.a 00:09:29.748 SO libspdk_bdev_virtio.so.6.0 00:09:29.748 LIB libspdk_bdev_raid.a 00:09:29.748 SYMLINK libspdk_bdev_virtio.so 00:09:30.007 SO libspdk_bdev_raid.so.6.0 00:09:30.007 SYMLINK libspdk_bdev_raid.so 00:09:31.942 LIB libspdk_bdev_nvme.a 00:09:31.942 SO libspdk_bdev_nvme.so.7.1 00:09:31.942 SYMLINK libspdk_bdev_nvme.so 00:09:32.545 CC module/event/subsystems/scheduler/scheduler.o 00:09:32.545 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:32.545 CC module/event/subsystems/keyring/keyring.o 00:09:32.545 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:32.545 CC module/event/subsystems/vmd/vmd.o 00:09:32.545 CC module/event/subsystems/fsdev/fsdev.o 00:09:32.545 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:32.545 CC module/event/subsystems/iobuf/iobuf.o 00:09:32.545 CC module/event/subsystems/sock/sock.o 00:09:32.545 LIB libspdk_event_vhost_blk.a 00:09:32.545 LIB 
libspdk_event_scheduler.a 00:09:32.545 SO libspdk_event_vhost_blk.so.3.0 00:09:32.545 LIB libspdk_event_vmd.a 00:09:32.545 LIB libspdk_event_keyring.a 00:09:32.545 LIB libspdk_event_sock.a 00:09:32.545 SO libspdk_event_scheduler.so.4.0 00:09:32.805 LIB libspdk_event_iobuf.a 00:09:32.805 SO libspdk_event_vmd.so.6.0 00:09:32.805 SO libspdk_event_keyring.so.1.0 00:09:32.805 SYMLINK libspdk_event_vhost_blk.so 00:09:32.805 SO libspdk_event_sock.so.5.0 00:09:32.805 LIB libspdk_event_fsdev.a 00:09:32.805 SO libspdk_event_iobuf.so.3.0 00:09:32.805 SO libspdk_event_fsdev.so.1.0 00:09:32.805 SYMLINK libspdk_event_scheduler.so 00:09:32.805 SYMLINK libspdk_event_vmd.so 00:09:32.805 SYMLINK libspdk_event_sock.so 00:09:32.805 SYMLINK libspdk_event_keyring.so 00:09:32.805 SYMLINK libspdk_event_fsdev.so 00:09:32.805 SYMLINK libspdk_event_iobuf.so 00:09:33.064 CC module/event/subsystems/accel/accel.o 00:09:33.324 LIB libspdk_event_accel.a 00:09:33.324 SO libspdk_event_accel.so.6.0 00:09:33.324 SYMLINK libspdk_event_accel.so 00:09:33.891 CC module/event/subsystems/bdev/bdev.o 00:09:34.150 LIB libspdk_event_bdev.a 00:09:34.150 SO libspdk_event_bdev.so.6.0 00:09:34.150 SYMLINK libspdk_event_bdev.so 00:09:34.409 CC module/event/subsystems/ublk/ublk.o 00:09:34.409 CC module/event/subsystems/nbd/nbd.o 00:09:34.409 CC module/event/subsystems/scsi/scsi.o 00:09:34.409 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:34.409 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:34.668 LIB libspdk_event_nbd.a 00:09:34.668 SO libspdk_event_nbd.so.6.0 00:09:34.668 LIB libspdk_event_scsi.a 00:09:34.668 LIB libspdk_event_ublk.a 00:09:34.668 SO libspdk_event_scsi.so.6.0 00:09:34.668 SO libspdk_event_ublk.so.3.0 00:09:34.668 SYMLINK libspdk_event_nbd.so 00:09:34.668 LIB libspdk_event_nvmf.a 00:09:34.668 SYMLINK libspdk_event_scsi.so 00:09:34.668 SYMLINK libspdk_event_ublk.so 00:09:34.668 SO libspdk_event_nvmf.so.6.0 00:09:34.927 SYMLINK libspdk_event_nvmf.so 00:09:35.186 CC 
module/event/subsystems/iscsi/iscsi.o 00:09:35.186 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:35.186 LIB libspdk_event_vhost_scsi.a 00:09:35.186 LIB libspdk_event_iscsi.a 00:09:35.444 SO libspdk_event_vhost_scsi.so.3.0 00:09:35.444 SO libspdk_event_iscsi.so.6.0 00:09:35.444 SYMLINK libspdk_event_vhost_scsi.so 00:09:35.444 SYMLINK libspdk_event_iscsi.so 00:09:35.702 SO libspdk.so.6.0 00:09:35.702 SYMLINK libspdk.so 00:09:35.960 CC app/trace_record/trace_record.o 00:09:35.960 CC app/spdk_lspci/spdk_lspci.o 00:09:35.960 CXX app/trace/trace.o 00:09:35.960 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:35.960 CC app/nvmf_tgt/nvmf_main.o 00:09:35.960 CC examples/ioat/perf/perf.o 00:09:35.960 CC app/spdk_tgt/spdk_tgt.o 00:09:35.960 CC app/iscsi_tgt/iscsi_tgt.o 00:09:35.960 CC test/thread/poller_perf/poller_perf.o 00:09:35.960 CC examples/util/zipf/zipf.o 00:09:36.218 LINK spdk_lspci 00:09:36.218 LINK interrupt_tgt 00:09:36.218 LINK poller_perf 00:09:36.218 LINK ioat_perf 00:09:36.218 LINK spdk_tgt 00:09:36.218 LINK spdk_trace_record 00:09:36.218 LINK nvmf_tgt 00:09:36.218 LINK zipf 00:09:36.218 LINK iscsi_tgt 00:09:36.477 LINK spdk_trace 00:09:36.477 CC app/spdk_nvme_perf/perf.o 00:09:36.477 CC examples/ioat/verify/verify.o 00:09:36.477 CC app/spdk_nvme_identify/identify.o 00:09:36.477 CC app/spdk_nvme_discover/discovery_aer.o 00:09:36.477 CC app/spdk_top/spdk_top.o 00:09:36.477 CC test/dma/test_dma/test_dma.o 00:09:36.477 CC test/app/bdev_svc/bdev_svc.o 00:09:36.736 CC app/spdk_dd/spdk_dd.o 00:09:36.736 LINK verify 00:09:36.736 LINK spdk_nvme_discover 00:09:36.736 LINK bdev_svc 00:09:36.995 CC app/fio/nvme/fio_plugin.o 00:09:36.995 CC app/vhost/vhost.o 00:09:36.995 LINK spdk_dd 00:09:37.253 CC examples/thread/thread/thread_ex.o 00:09:37.253 LINK test_dma 00:09:37.253 CC examples/sock/hello_world/hello_sock.o 00:09:37.253 LINK vhost 00:09:37.253 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:37.513 TEST_HEADER include/spdk/accel.h 00:09:37.513 TEST_HEADER 
include/spdk/accel_module.h 00:09:37.513 TEST_HEADER include/spdk/assert.h 00:09:37.513 TEST_HEADER include/spdk/barrier.h 00:09:37.513 TEST_HEADER include/spdk/base64.h 00:09:37.513 TEST_HEADER include/spdk/bdev.h 00:09:37.513 TEST_HEADER include/spdk/bdev_module.h 00:09:37.513 TEST_HEADER include/spdk/bdev_zone.h 00:09:37.513 TEST_HEADER include/spdk/bit_array.h 00:09:37.513 TEST_HEADER include/spdk/bit_pool.h 00:09:37.513 TEST_HEADER include/spdk/blob_bdev.h 00:09:37.513 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:37.513 TEST_HEADER include/spdk/blobfs.h 00:09:37.513 TEST_HEADER include/spdk/blob.h 00:09:37.513 TEST_HEADER include/spdk/conf.h 00:09:37.513 TEST_HEADER include/spdk/config.h 00:09:37.513 TEST_HEADER include/spdk/cpuset.h 00:09:37.513 TEST_HEADER include/spdk/crc16.h 00:09:37.513 TEST_HEADER include/spdk/crc32.h 00:09:37.513 TEST_HEADER include/spdk/crc64.h 00:09:37.513 TEST_HEADER include/spdk/dif.h 00:09:37.513 TEST_HEADER include/spdk/dma.h 00:09:37.513 TEST_HEADER include/spdk/endian.h 00:09:37.513 TEST_HEADER include/spdk/env_dpdk.h 00:09:37.513 TEST_HEADER include/spdk/env.h 00:09:37.513 TEST_HEADER include/spdk/event.h 00:09:37.513 TEST_HEADER include/spdk/fd_group.h 00:09:37.513 TEST_HEADER include/spdk/fd.h 00:09:37.513 TEST_HEADER include/spdk/file.h 00:09:37.513 TEST_HEADER include/spdk/fsdev.h 00:09:37.513 TEST_HEADER include/spdk/fsdev_module.h 00:09:37.513 TEST_HEADER include/spdk/ftl.h 00:09:37.513 TEST_HEADER include/spdk/gpt_spec.h 00:09:37.513 TEST_HEADER include/spdk/hexlify.h 00:09:37.513 TEST_HEADER include/spdk/histogram_data.h 00:09:37.513 LINK thread 00:09:37.513 TEST_HEADER include/spdk/idxd.h 00:09:37.513 TEST_HEADER include/spdk/idxd_spec.h 00:09:37.513 TEST_HEADER include/spdk/init.h 00:09:37.513 TEST_HEADER include/spdk/ioat.h 00:09:37.513 TEST_HEADER include/spdk/ioat_spec.h 00:09:37.513 TEST_HEADER include/spdk/iscsi_spec.h 00:09:37.513 TEST_HEADER include/spdk/json.h 00:09:37.513 TEST_HEADER include/spdk/jsonrpc.h 
00:09:37.513 TEST_HEADER include/spdk/keyring.h 00:09:37.513 TEST_HEADER include/spdk/keyring_module.h 00:09:37.513 TEST_HEADER include/spdk/likely.h 00:09:37.513 TEST_HEADER include/spdk/log.h 00:09:37.513 TEST_HEADER include/spdk/lvol.h 00:09:37.513 TEST_HEADER include/spdk/md5.h 00:09:37.513 TEST_HEADER include/spdk/memory.h 00:09:37.513 TEST_HEADER include/spdk/mmio.h 00:09:37.513 TEST_HEADER include/spdk/nbd.h 00:09:37.513 TEST_HEADER include/spdk/net.h 00:09:37.513 TEST_HEADER include/spdk/notify.h 00:09:37.513 TEST_HEADER include/spdk/nvme.h 00:09:37.513 TEST_HEADER include/spdk/nvme_intel.h 00:09:37.513 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:37.513 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:37.513 TEST_HEADER include/spdk/nvme_spec.h 00:09:37.513 TEST_HEADER include/spdk/nvme_zns.h 00:09:37.513 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:37.513 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:37.513 LINK hello_sock 00:09:37.513 TEST_HEADER include/spdk/nvmf.h 00:09:37.513 TEST_HEADER include/spdk/nvmf_spec.h 00:09:37.513 TEST_HEADER include/spdk/nvmf_transport.h 00:09:37.513 TEST_HEADER include/spdk/opal.h 00:09:37.513 TEST_HEADER include/spdk/opal_spec.h 00:09:37.513 TEST_HEADER include/spdk/pci_ids.h 00:09:37.513 TEST_HEADER include/spdk/pipe.h 00:09:37.513 TEST_HEADER include/spdk/queue.h 00:09:37.513 TEST_HEADER include/spdk/reduce.h 00:09:37.513 TEST_HEADER include/spdk/rpc.h 00:09:37.513 TEST_HEADER include/spdk/scheduler.h 00:09:37.513 TEST_HEADER include/spdk/scsi.h 00:09:37.513 TEST_HEADER include/spdk/scsi_spec.h 00:09:37.513 TEST_HEADER include/spdk/sock.h 00:09:37.513 TEST_HEADER include/spdk/stdinc.h 00:09:37.514 TEST_HEADER include/spdk/string.h 00:09:37.514 TEST_HEADER include/spdk/thread.h 00:09:37.514 TEST_HEADER include/spdk/trace.h 00:09:37.514 TEST_HEADER include/spdk/trace_parser.h 00:09:37.514 TEST_HEADER include/spdk/tree.h 00:09:37.514 TEST_HEADER include/spdk/ublk.h 00:09:37.514 TEST_HEADER include/spdk/util.h 00:09:37.514 
TEST_HEADER include/spdk/uuid.h 00:09:37.514 TEST_HEADER include/spdk/version.h 00:09:37.514 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:37.514 CC examples/vmd/lsvmd/lsvmd.o 00:09:37.514 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:37.514 TEST_HEADER include/spdk/vhost.h 00:09:37.514 TEST_HEADER include/spdk/vmd.h 00:09:37.514 LINK spdk_nvme_identify 00:09:37.514 TEST_HEADER include/spdk/xor.h 00:09:37.514 TEST_HEADER include/spdk/zipf.h 00:09:37.514 CXX test/cpp_headers/accel.o 00:09:37.514 LINK spdk_nvme_perf 00:09:37.772 LINK spdk_top 00:09:37.772 LINK lsvmd 00:09:37.772 CXX test/cpp_headers/accel_module.o 00:09:37.772 CC examples/vmd/led/led.o 00:09:37.772 CXX test/cpp_headers/assert.o 00:09:37.772 CXX test/cpp_headers/barrier.o 00:09:37.772 CC test/app/histogram_perf/histogram_perf.o 00:09:38.030 LINK nvme_fuzz 00:09:38.030 CXX test/cpp_headers/base64.o 00:09:38.030 LINK spdk_nvme 00:09:38.030 LINK histogram_perf 00:09:38.030 LINK led 00:09:38.030 CXX test/cpp_headers/bdev.o 00:09:38.030 CC examples/idxd/perf/perf.o 00:09:38.289 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:38.289 CC examples/nvme/hello_world/hello_world.o 00:09:38.289 CC examples/accel/perf/accel_perf.o 00:09:38.289 CC app/fio/bdev/fio_plugin.o 00:09:38.289 CXX test/cpp_headers/bdev_module.o 00:09:38.289 CC examples/blob/hello_world/hello_blob.o 00:09:38.289 CXX test/cpp_headers/bdev_zone.o 00:09:38.289 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:38.548 CC examples/nvme/reconnect/reconnect.o 00:09:38.548 LINK hello_world 00:09:38.548 LINK hello_fsdev 00:09:38.548 CXX test/cpp_headers/bit_array.o 00:09:38.548 LINK idxd_perf 00:09:38.548 LINK hello_blob 00:09:38.548 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:38.806 CXX test/cpp_headers/bit_pool.o 00:09:38.806 LINK accel_perf 00:09:38.806 CC examples/nvme/arbitration/arbitration.o 00:09:38.806 CC examples/nvme/hotplug/hotplug.o 00:09:38.806 CC examples/blob/cli/blobcli.o 00:09:38.806 LINK spdk_bdev 00:09:38.806 CXX 
test/cpp_headers/blob_bdev.o 00:09:39.065 LINK reconnect 00:09:39.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:39.065 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:39.065 LINK hotplug 00:09:39.328 CC test/env/mem_callbacks/mem_callbacks.o 00:09:39.328 LINK arbitration 00:09:39.328 CXX test/cpp_headers/blobfs_bdev.o 00:09:39.328 LINK nvme_manage 00:09:39.328 CC test/env/vtophys/vtophys.o 00:09:39.328 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:39.587 CC test/env/memory/memory_ut.o 00:09:39.587 CXX test/cpp_headers/blobfs.o 00:09:39.587 LINK vhost_fuzz 00:09:39.587 LINK blobcli 00:09:39.587 LINK vtophys 00:09:39.587 CC test/env/pci/pci_ut.o 00:09:39.587 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:39.587 LINK env_dpdk_post_init 00:09:39.846 CXX test/cpp_headers/blob.o 00:09:39.846 LINK cmb_copy 00:09:39.846 LINK mem_callbacks 00:09:39.846 CC examples/nvme/abort/abort.o 00:09:39.846 CC test/app/jsoncat/jsoncat.o 00:09:39.846 CXX test/cpp_headers/conf.o 00:09:39.846 CC test/app/stub/stub.o 00:09:40.105 CC examples/bdev/hello_world/hello_bdev.o 00:09:40.105 LINK pci_ut 00:09:40.105 CXX test/cpp_headers/config.o 00:09:40.105 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:40.105 LINK jsoncat 00:09:40.105 CXX test/cpp_headers/cpuset.o 00:09:40.105 LINK stub 00:09:40.105 CC examples/bdev/bdevperf/bdevperf.o 00:09:40.362 LINK hello_bdev 00:09:40.362 LINK pmr_persistence 00:09:40.362 CXX test/cpp_headers/crc16.o 00:09:40.362 CXX test/cpp_headers/crc32.o 00:09:40.362 LINK abort 00:09:40.362 CC test/event/event_perf/event_perf.o 00:09:40.621 CC test/nvme/aer/aer.o 00:09:40.621 CC test/nvme/reset/reset.o 00:09:40.621 CXX test/cpp_headers/crc64.o 00:09:40.621 CXX test/cpp_headers/dif.o 00:09:40.621 CC test/nvme/sgl/sgl.o 00:09:40.621 LINK iscsi_fuzz 00:09:40.621 LINK event_perf 00:09:40.621 CC test/event/reactor/reactor.o 00:09:40.879 CXX test/cpp_headers/dma.o 00:09:40.879 LINK reactor 00:09:40.879 CXX test/cpp_headers/endian.o 00:09:40.879 CC 
test/event/reactor_perf/reactor_perf.o 00:09:40.879 CXX test/cpp_headers/env_dpdk.o 00:09:40.879 LINK aer 00:09:40.879 LINK reset 00:09:40.879 LINK sgl 00:09:40.879 LINK reactor_perf 00:09:41.147 LINK memory_ut 00:09:41.147 CC test/event/app_repeat/app_repeat.o 00:09:41.147 CXX test/cpp_headers/env.o 00:09:41.147 CXX test/cpp_headers/event.o 00:09:41.147 CC test/nvme/e2edp/nvme_dp.o 00:09:41.147 CC test/event/scheduler/scheduler.o 00:09:41.147 LINK bdevperf 00:09:41.147 CC test/nvme/overhead/overhead.o 00:09:41.147 CC test/nvme/err_injection/err_injection.o 00:09:41.147 CC test/nvme/startup/startup.o 00:09:41.147 LINK app_repeat 00:09:41.406 CXX test/cpp_headers/fd_group.o 00:09:41.406 CC test/nvme/reserve/reserve.o 00:09:41.406 CC test/nvme/simple_copy/simple_copy.o 00:09:41.406 LINK err_injection 00:09:41.406 LINK nvme_dp 00:09:41.406 LINK scheduler 00:09:41.406 LINK startup 00:09:41.406 CXX test/cpp_headers/fd.o 00:09:41.406 LINK overhead 00:09:41.695 CC test/nvme/connect_stress/connect_stress.o 00:09:41.695 LINK reserve 00:09:41.695 CC examples/nvmf/nvmf/nvmf.o 00:09:41.695 LINK simple_copy 00:09:41.695 CXX test/cpp_headers/file.o 00:09:41.695 CC test/nvme/boot_partition/boot_partition.o 00:09:41.695 CC test/nvme/compliance/nvme_compliance.o 00:09:41.695 CC test/nvme/fused_ordering/fused_ordering.o 00:09:41.695 LINK connect_stress 00:09:41.954 CC test/nvme/fdp/fdp.o 00:09:41.954 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:41.954 CXX test/cpp_headers/fsdev.o 00:09:41.954 CC test/rpc_client/rpc_client_test.o 00:09:41.954 LINK boot_partition 00:09:41.954 CC test/nvme/cuse/cuse.o 00:09:41.954 LINK nvmf 00:09:41.954 LINK fused_ordering 00:09:41.954 CXX test/cpp_headers/fsdev_module.o 00:09:41.954 LINK doorbell_aers 00:09:42.213 LINK rpc_client_test 00:09:42.213 LINK nvme_compliance 00:09:42.213 CXX test/cpp_headers/ftl.o 00:09:42.213 CC test/accel/dif/dif.o 00:09:42.213 CXX test/cpp_headers/gpt_spec.o 00:09:42.213 CXX test/cpp_headers/hexlify.o 00:09:42.213 CXX 
test/cpp_headers/histogram_data.o 00:09:42.213 LINK fdp 00:09:42.213 CXX test/cpp_headers/idxd.o 00:09:42.472 CC test/blobfs/mkfs/mkfs.o 00:09:42.472 CXX test/cpp_headers/idxd_spec.o 00:09:42.472 CXX test/cpp_headers/init.o 00:09:42.472 CXX test/cpp_headers/ioat.o 00:09:42.472 CXX test/cpp_headers/ioat_spec.o 00:09:42.472 CC test/lvol/esnap/esnap.o 00:09:42.472 CXX test/cpp_headers/iscsi_spec.o 00:09:42.472 CXX test/cpp_headers/json.o 00:09:42.472 CXX test/cpp_headers/jsonrpc.o 00:09:42.729 CXX test/cpp_headers/keyring.o 00:09:42.729 CXX test/cpp_headers/keyring_module.o 00:09:42.729 CXX test/cpp_headers/likely.o 00:09:42.729 LINK mkfs 00:09:42.729 CXX test/cpp_headers/log.o 00:09:42.729 CXX test/cpp_headers/lvol.o 00:09:42.729 CXX test/cpp_headers/md5.o 00:09:42.729 CXX test/cpp_headers/memory.o 00:09:42.729 CXX test/cpp_headers/mmio.o 00:09:42.729 CXX test/cpp_headers/nbd.o 00:09:42.729 CXX test/cpp_headers/net.o 00:09:42.729 CXX test/cpp_headers/notify.o 00:09:42.988 CXX test/cpp_headers/nvme.o 00:09:42.988 CXX test/cpp_headers/nvme_intel.o 00:09:42.988 CXX test/cpp_headers/nvme_ocssd.o 00:09:42.988 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:42.988 CXX test/cpp_headers/nvme_spec.o 00:09:42.988 CXX test/cpp_headers/nvme_zns.o 00:09:42.988 CXX test/cpp_headers/nvmf_cmd.o 00:09:42.988 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:42.988 LINK dif 00:09:43.246 CXX test/cpp_headers/nvmf.o 00:09:43.246 CXX test/cpp_headers/nvmf_spec.o 00:09:43.246 CXX test/cpp_headers/nvmf_transport.o 00:09:43.246 CXX test/cpp_headers/opal.o 00:09:43.246 CXX test/cpp_headers/opal_spec.o 00:09:43.246 CXX test/cpp_headers/pci_ids.o 00:09:43.246 CXX test/cpp_headers/pipe.o 00:09:43.246 CXX test/cpp_headers/queue.o 00:09:43.504 CXX test/cpp_headers/reduce.o 00:09:43.504 CXX test/cpp_headers/rpc.o 00:09:43.505 CXX test/cpp_headers/scheduler.o 00:09:43.505 CXX test/cpp_headers/scsi.o 00:09:43.505 CXX test/cpp_headers/scsi_spec.o 00:09:43.505 LINK cuse 00:09:43.505 CXX test/cpp_headers/sock.o 
00:09:43.505 CXX test/cpp_headers/stdinc.o 00:09:43.505 CXX test/cpp_headers/string.o 00:09:43.762 CXX test/cpp_headers/thread.o 00:09:43.762 CXX test/cpp_headers/trace.o 00:09:43.762 CXX test/cpp_headers/trace_parser.o 00:09:43.762 CXX test/cpp_headers/tree.o 00:09:43.762 CXX test/cpp_headers/ublk.o 00:09:43.762 CXX test/cpp_headers/util.o 00:09:43.762 CXX test/cpp_headers/uuid.o 00:09:43.762 CC test/bdev/bdevio/bdevio.o 00:09:43.762 CXX test/cpp_headers/version.o 00:09:43.762 CXX test/cpp_headers/vfio_user_pci.o 00:09:43.762 CXX test/cpp_headers/vfio_user_spec.o 00:09:43.762 CXX test/cpp_headers/vhost.o 00:09:44.021 CXX test/cpp_headers/vmd.o 00:09:44.021 CXX test/cpp_headers/xor.o 00:09:44.021 CXX test/cpp_headers/zipf.o 00:09:44.279 LINK bdevio 00:09:49.584 LINK esnap 00:09:50.150 00:09:50.150 real 1m48.621s 00:09:50.150 user 9m41.453s 00:09:50.150 sys 1m49.009s 00:09:50.150 ************************************ 00:09:50.150 END TEST make 00:09:50.150 ************************************ 00:09:50.150 22:51:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:50.150 22:51:05 make -- common/autotest_common.sh@10 -- $ set +x 00:09:50.409 22:51:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:50.409 22:51:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:50.409 22:51:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:50.409 22:51:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:50.409 22:51:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:50.409 22:51:06 -- pm/common@44 -- $ pid=5474 00:09:50.409 22:51:06 -- pm/common@50 -- $ kill -TERM 5474 00:09:50.409 22:51:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:50.409 22:51:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:50.409 22:51:06 -- pm/common@44 -- $ pid=5475 00:09:50.409 22:51:06 -- pm/common@50 -- $ kill 
-TERM 5475 00:09:50.409 22:51:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:50.409 22:51:06 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:50.409 22:51:06 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.409 22:51:06 -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.409 22:51:06 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.409 22:51:06 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.409 22:51:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.409 22:51:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.409 22:51:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.409 22:51:06 -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.409 22:51:06 -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.409 22:51:06 -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.409 22:51:06 -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.409 22:51:06 -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.409 22:51:06 -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.409 22:51:06 -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.409 22:51:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.409 22:51:06 -- scripts/common.sh@344 -- # case "$op" in 00:09:50.409 22:51:06 -- scripts/common.sh@345 -- # : 1 00:09:50.409 22:51:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.409 22:51:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.409 22:51:06 -- scripts/common.sh@365 -- # decimal 1 00:09:50.409 22:51:06 -- scripts/common.sh@353 -- # local d=1 00:09:50.409 22:51:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.409 22:51:06 -- scripts/common.sh@355 -- # echo 1 00:09:50.409 22:51:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.409 22:51:06 -- scripts/common.sh@366 -- # decimal 2 00:09:50.409 22:51:06 -- scripts/common.sh@353 -- # local d=2 00:09:50.409 22:51:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.409 22:51:06 -- scripts/common.sh@355 -- # echo 2 00:09:50.409 22:51:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.409 22:51:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.409 22:51:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.409 22:51:06 -- scripts/common.sh@368 -- # return 0 00:09:50.409 22:51:06 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.409 22:51:06 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.409 --rc genhtml_branch_coverage=1 00:09:50.409 --rc genhtml_function_coverage=1 00:09:50.409 --rc genhtml_legend=1 00:09:50.409 --rc geninfo_all_blocks=1 00:09:50.409 --rc geninfo_unexecuted_blocks=1 00:09:50.409 00:09:50.409 ' 00:09:50.409 22:51:06 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.409 --rc genhtml_branch_coverage=1 00:09:50.409 --rc genhtml_function_coverage=1 00:09:50.409 --rc genhtml_legend=1 00:09:50.409 --rc geninfo_all_blocks=1 00:09:50.409 --rc geninfo_unexecuted_blocks=1 00:09:50.409 00:09:50.409 ' 00:09:50.409 22:51:06 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.409 --rc genhtml_branch_coverage=1 00:09:50.409 --rc 
genhtml_function_coverage=1 00:09:50.409 --rc genhtml_legend=1 00:09:50.409 --rc geninfo_all_blocks=1 00:09:50.409 --rc geninfo_unexecuted_blocks=1 00:09:50.409 00:09:50.409 ' 00:09:50.409 22:51:06 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.409 --rc genhtml_branch_coverage=1 00:09:50.409 --rc genhtml_function_coverage=1 00:09:50.409 --rc genhtml_legend=1 00:09:50.409 --rc geninfo_all_blocks=1 00:09:50.409 --rc geninfo_unexecuted_blocks=1 00:09:50.409 00:09:50.409 ' 00:09:50.409 22:51:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.409 22:51:06 -- nvmf/common.sh@7 -- # uname -s 00:09:50.409 22:51:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.409 22:51:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.409 22:51:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.409 22:51:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.409 22:51:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.409 22:51:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.409 22:51:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.409 22:51:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.409 22:51:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.409 22:51:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.668 22:51:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fad45d12-5e8f-4f8f-b4ef-09b6c6113c8d 00:09:50.668 22:51:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=fad45d12-5e8f-4f8f-b4ef-09b6c6113c8d 00:09:50.668 22:51:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.668 22:51:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.669 22:51:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:50.669 22:51:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:50.669 22:51:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.669 22:51:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.669 22:51:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.669 22:51:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.669 22:51:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.669 22:51:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.669 22:51:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.669 22:51:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.669 22:51:06 -- paths/export.sh@5 -- # export PATH 00:09:50.669 22:51:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.669 22:51:06 -- nvmf/common.sh@51 -- # : 0 00:09:50.669 22:51:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.669 22:51:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.669 22:51:06 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:09:50.669 22:51:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.669 22:51:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.669 22:51:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.669 22:51:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.669 22:51:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.669 22:51:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.669 22:51:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:50.669 22:51:06 -- spdk/autotest.sh@32 -- # uname -s 00:09:50.669 22:51:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:50.669 22:51:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:50.669 22:51:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:50.669 22:51:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:50.669 22:51:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:50.669 22:51:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:50.669 22:51:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:50.669 22:51:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:50.669 22:51:06 -- spdk/autotest.sh@48 -- # udevadm_pid=54694 00:09:50.669 22:51:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:50.669 22:51:06 -- pm/common@17 -- # local monitor 00:09:50.669 22:51:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:50.669 22:51:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:50.669 22:51:06 -- pm/common@25 -- # sleep 1 00:09:50.669 22:51:06 -- pm/common@21 -- # date +%s 00:09:50.669 22:51:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:50.669 22:51:06 -- 
pm/common@21 -- # date +%s 00:09:50.669 22:51:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733784666 00:09:50.669 22:51:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733784666 00:09:50.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733784666_collect-cpu-load.pm.log 00:09:50.669 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733784666_collect-vmstat.pm.log 00:09:51.607 22:51:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:51.607 22:51:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:51.607 22:51:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.607 22:51:07 -- common/autotest_common.sh@10 -- # set +x 00:09:51.607 22:51:07 -- spdk/autotest.sh@59 -- # create_test_list 00:09:51.607 22:51:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:51.607 22:51:07 -- common/autotest_common.sh@10 -- # set +x 00:09:51.607 22:51:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:51.607 22:51:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:51.607 22:51:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:51.607 22:51:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:51.607 22:51:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:51.607 22:51:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:51.866 22:51:07 -- common/autotest_common.sh@1457 -- # uname 00:09:51.866 22:51:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:51.866 22:51:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:51.866 22:51:07 -- common/autotest_common.sh@1477 -- 
# uname 00:09:51.866 22:51:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:09:51.866 22:51:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:51.866 22:51:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:51.866 lcov: LCOV version 1.15 00:09:51.866 22:51:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:10.021 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:10.021 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:28.183 22:51:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:28.183 22:51:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.183 22:51:42 -- common/autotest_common.sh@10 -- # set +x 00:10:28.183 22:51:42 -- spdk/autotest.sh@78 -- # rm -f 00:10:28.183 22:51:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:28.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:28.183 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:28.183 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:28.183 22:51:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:28.183 22:51:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:28.183 22:51:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:28.183 22:51:43 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:28.183 
22:51:43 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:28.183 22:51:43 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:28.183 22:51:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:28.183 22:51:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:28.183 22:51:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:28.183 22:51:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:28.183 22:51:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:28.183 22:51:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:28.183 22:51:43 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:28.183 22:51:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:28.183 22:51:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:28.183 22:51:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:28.183 22:51:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:28.183 22:51:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:10:28.183 22:51:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:10:28.183 22:51:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:28.183 22:51:43 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:10:28.183 22:51:43 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:10:28.183 22:51:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:28.183 22:51:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:28.183 22:51:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:28.184 22:51:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:28.184 22:51:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:28.184 22:51:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:28.184 22:51:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:28.184 22:51:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:28.184 No valid GPT data, bailing 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # pt= 00:10:28.184 22:51:43 -- scripts/common.sh@395 -- # return 1 00:10:28.184 22:51:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:28.184 1+0 records in 00:10:28.184 1+0 records out 00:10:28.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886431 s, 118 MB/s 00:10:28.184 22:51:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:28.184 22:51:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:28.184 22:51:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:28.184 22:51:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:28.184 22:51:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:28.184 No valid GPT data, bailing 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # pt= 00:10:28.184 22:51:43 -- scripts/common.sh@395 -- # return 1 00:10:28.184 22:51:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:28.184 1+0 records in 00:10:28.184 1+0 records 
out 00:10:28.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472251 s, 222 MB/s 00:10:28.184 22:51:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:28.184 22:51:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:28.184 22:51:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:10:28.184 22:51:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:10:28.184 22:51:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:28.184 No valid GPT data, bailing 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # pt= 00:10:28.184 22:51:43 -- scripts/common.sh@395 -- # return 1 00:10:28.184 22:51:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:28.184 1+0 records in 00:10:28.184 1+0 records out 00:10:28.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451533 s, 232 MB/s 00:10:28.184 22:51:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:28.184 22:51:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:28.184 22:51:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:10:28.184 22:51:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:10:28.184 22:51:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:28.184 No valid GPT data, bailing 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:28.184 22:51:43 -- scripts/common.sh@394 -- # pt= 00:10:28.184 22:51:43 -- scripts/common.sh@395 -- # return 1 00:10:28.184 22:51:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:28.184 1+0 records in 00:10:28.184 1+0 records out 00:10:28.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767111 s, 137 MB/s 00:10:28.184 22:51:43 -- spdk/autotest.sh@105 -- # sync 00:10:28.184 22:51:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:10:28.184 22:51:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:28.184 22:51:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:30.088 22:51:45 -- spdk/autotest.sh@111 -- # uname -s 00:10:30.088 22:51:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:30.088 22:51:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:30.088 22:51:45 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:31.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:31.027 Hugepages 00:10:31.027 node hugesize free / total 00:10:31.027 node0 1048576kB 0 / 0 00:10:31.027 node0 2048kB 0 / 0 00:10:31.027 00:10:31.027 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:31.027 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:31.027 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:31.286 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:31.286 22:51:46 -- spdk/autotest.sh@117 -- # uname -s 00:10:31.286 22:51:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:10:31.286 22:51:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:10:31.286 22:51:46 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:31.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.112 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.112 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.112 22:51:47 -- common/autotest_common.sh@1517 -- # sleep 1 00:10:33.492 22:51:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:10:33.492 22:51:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:10:33.492 22:51:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:33.492 22:51:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
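[editor's note] The pre-cleanup trace above walks every namespace under /sys/class/nvme and calls `is_block_zoned`, which treats a namespace as zoned when `/sys/block/<ns>/queue/zoned` reads anything other than `none`. A standalone sketch of that check, run against a throwaway mock sysfs tree so it works on any host (the mock paths and device names are illustrative, not the real sysfs layout):

```shell
# Sketch of autotest_common.sh's zoned-namespace detection: a namespace is
# zoned when its queue/zoned attribute exists and is not "none". A mktemp
# mock stands in for /sys/block so the sketch runs without NVMe hardware.
mock=$(mktemp -d)
mkdir -p "$mock/nvme0n1/queue" "$mock/nvme1n1/queue"
echo none > "$mock/nvme0n1/queue/zoned"          # conventional namespace
echo host-managed > "$mock/nvme1n1/queue/zoned"  # zoned namespace

is_block_zoned() {
    local device=$1
    # A missing attribute means the kernel predates zoned support: not zoned.
    [[ -e $mock/$device/queue/zoned ]] || return 1
    [[ $(<"$mock/$device/queue/zoned") != none ]]
}

zoned_devs=()
for dev in nvme0n1 nvme1n1; do
    is_block_zoned "$dev" && zoned_devs+=("$dev")
done
echo "zoned: ${zoned_devs[*]}"
```

In the run logged above every namespace reported `none`, so `(( 0 > 0 ))` skipped the zoned-device handling and the script went straight to the GPT check and 1 MiB `dd` wipe of each device.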
00:10:33.492 22:51:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:33.492 22:51:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:33.492 22:51:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:33.492 22:51:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:33.492 22:51:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:33.492 22:51:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:10:33.492 22:51:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:33.492 22:51:48 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:33.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:33.776 Waiting for block devices as requested 00:10:33.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.036 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.036 22:51:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:34.036 22:51:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:10:34.036 22:51:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:10:34.036 
22:51:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:34.036 22:51:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:34.036 22:51:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:34.036 22:51:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1543 -- # continue 00:10:34.036 22:51:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:34.036 22:51:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:34.036 22:51:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:10:34.036 22:51:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:10:34.036 22:51:49 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:34.036 22:51:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:34.036 22:51:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:34.036 22:51:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:34.036 22:51:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:34.036 22:51:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:34.036 22:51:49 -- common/autotest_common.sh@1543 -- # continue 00:10:34.036 22:51:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:34.036 22:51:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.036 22:51:49 -- common/autotest_common.sh@10 -- # set +x 00:10:34.036 22:51:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:34.036 22:51:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.036 22:51:49 -- common/autotest_common.sh@10 -- # set +x 00:10:34.036 22:51:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:34.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.973 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.973 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.232 22:51:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:35.232 22:51:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.232 22:51:50 -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 22:51:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:35.232 22:51:50 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:35.232 22:51:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:35.232 22:51:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:35.232 22:51:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:35.232 22:51:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:35.232 22:51:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:35.232 22:51:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:35.232 22:51:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:35.232 22:51:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:35.232 22:51:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:35.232 22:51:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:35.232 22:51:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:35.232 22:51:51 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:10:35.232 22:51:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:35.232 22:51:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:35.232 22:51:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:35.232 22:51:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:35.232 22:51:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:35.232 22:51:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:35.232 22:51:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:35.232 22:51:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:35.232 22:51:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:35.232 22:51:51 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:10:35.232 22:51:51 -- 
common/autotest_common.sh@1572 -- # return 0 00:10:35.232 22:51:51 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:10:35.232 22:51:51 -- common/autotest_common.sh@1580 -- # return 0 00:10:35.232 22:51:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:35.232 22:51:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:35.232 22:51:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:35.232 22:51:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:35.232 22:51:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:35.232 22:51:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.232 22:51:51 -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 22:51:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:35.232 22:51:51 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:35.232 22:51:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.232 22:51:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.232 22:51:51 -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 ************************************ 00:10:35.232 START TEST env 00:10:35.232 ************************************ 00:10:35.233 22:51:51 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:35.493 * Looking for test storage... 
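[editor's note] The opal-revert pass above probes each controller with `nvme id-ctrl | grep oacs | cut -d: -f2` and then tests the namespace-management bit (bit 3 of OACS), plus `unvmcap` for unallocated capacity. A self-contained sketch of that parsing, with a canned two-line `id-ctrl` excerpt standing in for a live controller (the sample values mirror the `0x12a` / `0` seen in the log; a real run pipes `nvme id-ctrl /dev/nvmeX`):

```shell
# Parse the oacs word from (sample) `nvme id-ctrl` output and test the
# namespace-management bit, as autotest_common.sh does before reverting
# namespaces. The here-doc is a stand-in for live controller output.
sample_id_ctrl() {
    cat <<'EOF'
oacs      : 0x12a
unvmcap   : 0
EOF
}

oacs=$(sample_id_ctrl | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))   # OACS bit 3: Namespace Management supported
unvmcap=$(sample_id_ctrl | grep unvmcap | cut -d: -f2)

echo "oacs=$oacs ns_manage=$oacs_ns_manage unvmcap=$unvmcap"
```

With `oacs_ns_manage=8` (bit set) and `unvmcap` equal to 0, the traced loop takes the `continue` path for both controllers: namespace management is supported and no capacity needs reverting.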
00:10:35.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.493 22:51:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.493 22:51:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.493 22:51:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.493 22:51:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.493 22:51:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.493 22:51:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.493 22:51:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.493 22:51:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.493 22:51:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.493 22:51:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.493 22:51:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.493 22:51:51 env -- scripts/common.sh@344 -- # case "$op" in 00:10:35.493 22:51:51 env -- scripts/common.sh@345 -- # : 1 00:10:35.493 22:51:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.493 22:51:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.493 22:51:51 env -- scripts/common.sh@365 -- # decimal 1 00:10:35.493 22:51:51 env -- scripts/common.sh@353 -- # local d=1 00:10:35.493 22:51:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.493 22:51:51 env -- scripts/common.sh@355 -- # echo 1 00:10:35.493 22:51:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.493 22:51:51 env -- scripts/common.sh@366 -- # decimal 2 00:10:35.493 22:51:51 env -- scripts/common.sh@353 -- # local d=2 00:10:35.493 22:51:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.493 22:51:51 env -- scripts/common.sh@355 -- # echo 2 00:10:35.493 22:51:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.493 22:51:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.493 22:51:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.493 22:51:51 env -- scripts/common.sh@368 -- # return 0 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.493 --rc genhtml_branch_coverage=1 00:10:35.493 --rc genhtml_function_coverage=1 00:10:35.493 --rc genhtml_legend=1 00:10:35.493 --rc geninfo_all_blocks=1 00:10:35.493 --rc geninfo_unexecuted_blocks=1 00:10:35.493 00:10:35.493 ' 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.493 --rc genhtml_branch_coverage=1 00:10:35.493 --rc genhtml_function_coverage=1 00:10:35.493 --rc genhtml_legend=1 00:10:35.493 --rc geninfo_all_blocks=1 00:10:35.493 --rc geninfo_unexecuted_blocks=1 00:10:35.493 00:10:35.493 ' 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:35.493 --rc genhtml_branch_coverage=1 00:10:35.493 --rc genhtml_function_coverage=1 00:10:35.493 --rc genhtml_legend=1 00:10:35.493 --rc geninfo_all_blocks=1 00:10:35.493 --rc geninfo_unexecuted_blocks=1 00:10:35.493 00:10:35.493 ' 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.493 --rc genhtml_branch_coverage=1 00:10:35.493 --rc genhtml_function_coverage=1 00:10:35.493 --rc genhtml_legend=1 00:10:35.493 --rc geninfo_all_blocks=1 00:10:35.493 --rc geninfo_unexecuted_blocks=1 00:10:35.493 00:10:35.493 ' 00:10:35.493 22:51:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.493 22:51:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.493 22:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:35.493 ************************************ 00:10:35.493 START TEST env_memory 00:10:35.493 ************************************ 00:10:35.493 22:51:51 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:35.753 00:10:35.753 00:10:35.753 CUnit - A unit testing framework for C - Version 2.1-3 00:10:35.753 http://cunit.sourceforge.net/ 00:10:35.753 00:10:35.753 00:10:35.753 Suite: memory 00:10:35.753 Test: alloc and free memory map ...[2024-12-09 22:51:51.395687] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:35.753 passed 00:10:35.753 Test: mem map translation ...[2024-12-09 22:51:51.439414] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:35.753 [2024-12-09 22:51:51.439571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:35.753 [2024-12-09 22:51:51.439656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:35.753 [2024-12-09 22:51:51.439700] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:35.753 passed 00:10:35.753 Test: mem map registration ...[2024-12-09 22:51:51.505142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:35.753 [2024-12-09 22:51:51.505212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:35.753 passed 00:10:35.753 Test: mem map adjacent registrations ...passed 00:10:35.753 00:10:35.753 Run Summary: Type Total Ran Passed Failed Inactive 00:10:35.753 suites 1 1 n/a 0 0 00:10:35.753 tests 4 4 4 0 0 00:10:35.753 asserts 152 152 152 0 n/a 00:10:35.753 00:10:35.753 Elapsed time = 0.238 seconds 00:10:36.013 ************************************ 00:10:36.013 END TEST env_memory 00:10:36.013 00:10:36.013 real 0m0.290s 00:10:36.013 user 0m0.250s 00:10:36.013 sys 0m0.032s 00:10:36.013 22:51:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.013 22:51:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:36.013 ************************************ 00:10:36.013 22:51:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:36.013 22:51:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.014 22:51:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.014 22:51:51 env -- common/autotest_common.sh@10 -- # set +x 00:10:36.014 
************************************ 00:10:36.014 START TEST env_vtophys 00:10:36.014 ************************************ 00:10:36.014 22:51:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:36.014 EAL: lib.eal log level changed from notice to debug 00:10:36.014 EAL: Detected lcore 0 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 1 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 2 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 3 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 4 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 5 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 6 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 7 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 8 as core 0 on socket 0 00:10:36.014 EAL: Detected lcore 9 as core 0 on socket 0 00:10:36.014 EAL: Maximum logical cores by configuration: 128 00:10:36.014 EAL: Detected CPU lcores: 10 00:10:36.014 EAL: Detected NUMA nodes: 1 00:10:36.014 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:36.014 EAL: Detected shared linkage of DPDK 00:10:36.014 EAL: No shared files mode enabled, IPC will be disabled 00:10:36.014 EAL: Selected IOVA mode 'PA' 00:10:36.014 EAL: Probing VFIO support... 00:10:36.014 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:36.014 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:36.014 EAL: Ask a virtual area of 0x2e000 bytes 00:10:36.014 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:36.014 EAL: Setting up physically contiguous memory... 
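[editor's note] The env-suite prologue traced a few records back gates the LCOV options on `lt 1.15 2`, i.e. scripts/common.sh's `cmp_versions` splitting both versions on `.` and `-` and comparing field by field. A pure-bash sketch of that dotted comparison (simplified to the numeric case the log exercises; the real helper also handles `>`, `>=`, and `<=`):

```shell
# Dotted-version less-than, as driven by `lt 1.15 2` in the trace above:
# split on . and -, compare fields numerically, treat missing fields as 0.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Field-wise numeric comparison is what makes `1.9 < 1.10` come out true, which a plain string compare would get wrong; that is why the harness bothers with `read -ra` instead of `[[ < ]]`.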
00:10:36.014 EAL: Setting maximum number of open files to 524288 00:10:36.014 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:36.014 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:36.014 EAL: Ask a virtual area of 0x61000 bytes 00:10:36.014 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:36.014 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:36.014 EAL: Ask a virtual area of 0x400000000 bytes 00:10:36.014 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:36.014 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:36.014 EAL: Ask a virtual area of 0x61000 bytes 00:10:36.014 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:36.014 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:36.014 EAL: Ask a virtual area of 0x400000000 bytes 00:10:36.014 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:36.014 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:36.014 EAL: Ask a virtual area of 0x61000 bytes 00:10:36.014 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:36.014 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:36.014 EAL: Ask a virtual area of 0x400000000 bytes 00:10:36.014 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:36.014 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:36.014 EAL: Ask a virtual area of 0x61000 bytes 00:10:36.014 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:36.014 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:36.014 EAL: Ask a virtual area of 0x400000000 bytes 00:10:36.014 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:36.014 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:36.014 EAL: Hugepages will be freed exactly as allocated. 
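[editor's note] The EAL lines above reserve four 0x400000000-byte memseg lists of 2 MB pages in virtual address space; the backing pages come from the kernel hugepage pools that `setup.sh status` printed earlier (`node0 2048kB 0 / 0`, `node0 1048576kB 0 / 0`). A hedged sketch of reading those per-size free/total counters, against a mktemp mock of `/sys/kernel/mm/hugepages` so it runs without root or reserved hugepages (the zero counts mirror this run):

```shell
# Read free/total hugepage counts per page size, in the same shape as the
# "node0 2048kB 0 / 0" lines from setup.sh status. A mock directory stands
# in for /sys/kernel/mm/hugepages; swap in the real path on a Linux host.
pool=$(mktemp -d)
mkdir -p "$pool/hugepages-2048kB" "$pool/hugepages-1048576kB"
for d in "$pool"/hugepages-*kB; do
    echo 0 > "$d/free_hugepages"
    echo 0 > "$d/nr_hugepages"
done

report=""
for d in "$pool"/hugepages-*kB; do
    size=${d##*hugepages-}
    report+="$size $(<"$d/free_hugepages") / $(<"$d/nr_hugepages")"$'\n'
done
printf '%s' "$report"
```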
00:10:36.014 EAL: No shared files mode enabled, IPC is disabled 00:10:36.014 EAL: No shared files mode enabled, IPC is disabled 00:10:36.014 EAL: TSC frequency is ~2290000 KHz 00:10:36.014 EAL: Main lcore 0 is ready (tid=7f3de0ebaa40;cpuset=[0]) 00:10:36.014 EAL: Trying to obtain current memory policy. 00:10:36.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.014 EAL: Restoring previous memory policy: 0 00:10:36.014 EAL: request: mp_malloc_sync 00:10:36.014 EAL: No shared files mode enabled, IPC is disabled 00:10:36.014 EAL: Heap on socket 0 was expanded by 2MB 00:10:36.014 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:36.014 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:36.014 EAL: Mem event callback 'spdk:(nil)' registered 00:10:36.014 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:36.274 00:10:36.274 00:10:36.274 CUnit - A unit testing framework for C - Version 2.1-3 00:10:36.274 http://cunit.sourceforge.net/ 00:10:36.274 00:10:36.274 00:10:36.274 Suite: components_suite 00:10:36.533 Test: vtophys_malloc_test ...passed 00:10:36.533 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:36.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.533 EAL: Restoring previous memory policy: 4 00:10:36.533 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.533 EAL: request: mp_malloc_sync 00:10:36.533 EAL: No shared files mode enabled, IPC is disabled 00:10:36.533 EAL: Heap on socket 0 was expanded by 4MB 00:10:36.533 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.533 EAL: request: mp_malloc_sync 00:10:36.533 EAL: No shared files mode enabled, IPC is disabled 00:10:36.533 EAL: Heap on socket 0 was shrunk by 4MB 00:10:36.533 EAL: Trying to obtain current memory policy. 
00:10:36.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.533 EAL: Restoring previous memory policy: 4 00:10:36.533 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.533 EAL: request: mp_malloc_sync 00:10:36.533 EAL: No shared files mode enabled, IPC is disabled 00:10:36.533 EAL: Heap on socket 0 was expanded by 6MB 00:10:36.533 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.533 EAL: request: mp_malloc_sync 00:10:36.533 EAL: No shared files mode enabled, IPC is disabled 00:10:36.533 EAL: Heap on socket 0 was shrunk by 6MB 00:10:36.533 EAL: Trying to obtain current memory policy. 00:10:36.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.533 EAL: Restoring previous memory policy: 4 00:10:36.533 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.533 EAL: request: mp_malloc_sync 00:10:36.533 EAL: No shared files mode enabled, IPC is disabled 00:10:36.533 EAL: Heap on socket 0 was expanded by 10MB 00:10:36.533 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.534 EAL: request: mp_malloc_sync 00:10:36.534 EAL: No shared files mode enabled, IPC is disabled 00:10:36.534 EAL: Heap on socket 0 was shrunk by 10MB 00:10:36.534 EAL: Trying to obtain current memory policy. 00:10:36.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.534 EAL: Restoring previous memory policy: 4 00:10:36.534 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.534 EAL: request: mp_malloc_sync 00:10:36.534 EAL: No shared files mode enabled, IPC is disabled 00:10:36.534 EAL: Heap on socket 0 was expanded by 18MB 00:10:36.534 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.534 EAL: request: mp_malloc_sync 00:10:36.534 EAL: No shared files mode enabled, IPC is disabled 00:10:36.534 EAL: Heap on socket 0 was shrunk by 18MB 00:10:36.534 EAL: Trying to obtain current memory policy. 
00:10:36.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.793 EAL: Restoring previous memory policy: 4 00:10:36.793 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.793 EAL: request: mp_malloc_sync 00:10:36.793 EAL: No shared files mode enabled, IPC is disabled 00:10:36.793 EAL: Heap on socket 0 was expanded by 34MB 00:10:36.793 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.793 EAL: request: mp_malloc_sync 00:10:36.793 EAL: No shared files mode enabled, IPC is disabled 00:10:36.793 EAL: Heap on socket 0 was shrunk by 34MB 00:10:36.793 EAL: Trying to obtain current memory policy. 00:10:36.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:36.793 EAL: Restoring previous memory policy: 4 00:10:36.793 EAL: Calling mem event callback 'spdk:(nil)' 00:10:36.793 EAL: request: mp_malloc_sync 00:10:36.793 EAL: No shared files mode enabled, IPC is disabled 00:10:36.793 EAL: Heap on socket 0 was expanded by 66MB 00:10:37.052 EAL: Calling mem event callback 'spdk:(nil)' 00:10:37.052 EAL: request: mp_malloc_sync 00:10:37.052 EAL: No shared files mode enabled, IPC is disabled 00:10:37.052 EAL: Heap on socket 0 was shrunk by 66MB 00:10:37.052 EAL: Trying to obtain current memory policy. 00:10:37.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:37.052 EAL: Restoring previous memory policy: 4 00:10:37.052 EAL: Calling mem event callback 'spdk:(nil)' 00:10:37.052 EAL: request: mp_malloc_sync 00:10:37.052 EAL: No shared files mode enabled, IPC is disabled 00:10:37.052 EAL: Heap on socket 0 was expanded by 130MB 00:10:37.311 EAL: Calling mem event callback 'spdk:(nil)' 00:10:37.311 EAL: request: mp_malloc_sync 00:10:37.311 EAL: No shared files mode enabled, IPC is disabled 00:10:37.311 EAL: Heap on socket 0 was shrunk by 130MB 00:10:37.571 EAL: Trying to obtain current memory policy. 
00:10:37.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:37.571 EAL: Restoring previous memory policy: 4 00:10:37.571 EAL: Calling mem event callback 'spdk:(nil)' 00:10:37.571 EAL: request: mp_malloc_sync 00:10:37.571 EAL: No shared files mode enabled, IPC is disabled 00:10:37.571 EAL: Heap on socket 0 was expanded by 258MB 00:10:38.139 EAL: Calling mem event callback 'spdk:(nil)' 00:10:38.139 EAL: request: mp_malloc_sync 00:10:38.139 EAL: No shared files mode enabled, IPC is disabled 00:10:38.139 EAL: Heap on socket 0 was shrunk by 258MB 00:10:38.399 EAL: Trying to obtain current memory policy. 00:10:38.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:38.657 EAL: Restoring previous memory policy: 4 00:10:38.657 EAL: Calling mem event callback 'spdk:(nil)' 00:10:38.657 EAL: request: mp_malloc_sync 00:10:38.657 EAL: No shared files mode enabled, IPC is disabled 00:10:38.657 EAL: Heap on socket 0 was expanded by 514MB 00:10:39.592 EAL: Calling mem event callback 'spdk:(nil)' 00:10:39.592 EAL: request: mp_malloc_sync 00:10:39.592 EAL: No shared files mode enabled, IPC is disabled 00:10:39.592 EAL: Heap on socket 0 was shrunk by 514MB 00:10:40.527 EAL: Trying to obtain current memory policy. 
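[editor's note] The expand/shrink sizes vtophys_spdk_malloc_test steps through in this suite — 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB — fit the pattern 2^k + 2 MB for k = 1..10, presumably each power-of-two test allocation riding on top of the heap's initial 2 MB expansion. A sketch reproducing the ladder:

```shell
# Reproduce the malloc-test heap ladder seen in the EAL log: for each
# round k, the heap grows by (1 << k) + 2 MB and is then shrunk again.
sizes=()
for (( k = 1; k <= 10; k++ )); do
    sizes+=( $(( (1 << k) + 2 )) )
done
echo "${sizes[*]} (MB)"
```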
00:10:40.527 EAL: Setting policy MPOL_PREFERRED for socket 0
00:10:40.787 EAL: Restoring previous memory policy: 4
00:10:40.787 EAL: Calling mem event callback 'spdk:(nil)'
00:10:40.787 EAL: request: mp_malloc_sync
00:10:40.787 EAL: No shared files mode enabled, IPC is disabled
00:10:40.787 EAL: Heap on socket 0 was expanded by 1026MB
00:10:42.695 EAL: Calling mem event callback 'spdk:(nil)'
00:10:42.695 EAL: request: mp_malloc_sync
00:10:42.695 EAL: No shared files mode enabled, IPC is disabled
00:10:42.695 EAL: Heap on socket 0 was shrunk by 1026MB
00:10:44.600 passed
00:10:44.600
00:10:44.600 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:44.600               suites      1      1    n/a      0        0
00:10:44.600                tests      2      2      2      0        0
00:10:44.600              asserts   5698   5698   5698      0      n/a
00:10:44.600
00:10:44.600 Elapsed time = 8.192 seconds
00:10:44.600 EAL: Calling mem event callback 'spdk:(nil)'
00:10:44.600 EAL: request: mp_malloc_sync
00:10:44.600 EAL: No shared files mode enabled, IPC is disabled
00:10:44.600 EAL: Heap on socket 0 was shrunk by 2MB
00:10:44.600 EAL: No shared files mode enabled, IPC is disabled
00:10:44.600 EAL: No shared files mode enabled, IPC is disabled
00:10:44.600 EAL: No shared files mode enabled, IPC is disabled
00:10:44.600
00:10:44.600 real 0m8.499s
00:10:44.600 user 0m7.540s
00:10:44.600 sys 0m0.804s
00:10:44.600 ************************************
00:10:44.600 END TEST env_vtophys
00:10:44.600 ************************************
00:10:44.600 22:52:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:44.600 22:52:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:10:44.600 22:52:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:10:44.600 22:52:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:44.600 22:52:00 env
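The env_vtophys expansions visible in this chunk grow as 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB — each one a doubling power-of-two payload plus one extra 2MB hugepage (the trailing "shrunk by 2MB" entry releases that remainder). The sketch below reproduces those sizes; the doubling-plus-hugepage interpretation is an inference from the log, not taken from the test source, and the function name is hypothetical.

```python
# Hypothetical reconstruction of the heap-expansion sizes in the
# env_vtophys trace. Assumption (inferred from the log, not from the
# test source): each malloc doubles a power-of-two payload, and the EAL
# heap grows in 2MB hugepages, so every expansion shows up as
# payload + one extra 2MB page.
HUGEPAGE_MB = 2

def expected_expansions(start_mb=32, stop_mb=1024):
    """Yield the heap growth (in MB) for each doubling allocation."""
    size = start_mb
    while size <= stop_mb:
        yield size + HUGEPAGE_MB
        size *= 2

print(list(expected_expansions()))  # the 34/66/130/258/514/1026MB lines above
```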
************************************
00:10:44.600 START TEST env_pci
************************************
00:10:44.600 22:52:00 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:10:44.600
00:10:44.600
00:10:44.600 CUnit - A unit testing framework for C - Version 2.1-3
00:10:44.600 http://cunit.sourceforge.net/
00:10:44.600
00:10:44.601
00:10:44.601 Suite: pci
00:10:44.601 Test: pci_hook ...[2024-12-09 22:52:00.276760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57049 has claimed it
00:10:44.601 EAL: Cannot find device (10000:00:01.0)
00:10:44.601 passed
00:10:44.601
00:10:44.601 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:44.601               suites      1      1    n/a      0        0
00:10:44.601                tests      1      1      1      0        0
00:10:44.601              asserts     25     25     25      0      n/a
00:10:44.601
00:10:44.601 Elapsed time = 0.004 seconds
00:10:44.601 EAL: Failed to attach device on primary process
00:10:44.601
00:10:44.601 real 0m0.095s
00:10:44.601 user 0m0.042s
00:10:44.601 sys 0m0.051s
00:10:44.601 22:52:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:44.601 22:52:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:10:44.601 ************************************
00:10:44.601 END TEST env_pci
00:10:44.601 ************************************
00:10:44.601 22:52:00 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:10:44.601 22:52:00 env -- env/env.sh@15 -- # uname
00:10:44.601 22:52:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:10:44.601 22:52:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:10:44.601 22:52:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:10:44.601 22:52:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:44.601 22:52:00 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.601 22:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:10:44.601 ************************************ 00:10:44.601 START TEST env_dpdk_post_init 00:10:44.601 ************************************ 00:10:44.601 22:52:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:44.860 EAL: Detected CPU lcores: 10 00:10:44.860 EAL: Detected NUMA nodes: 1 00:10:44.860 EAL: Detected shared linkage of DPDK 00:10:44.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:44.860 EAL: Selected IOVA mode 'PA' 00:10:44.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:44.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:44.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:44.860 Starting DPDK initialization... 00:10:44.860 Starting SPDK post initialization... 00:10:44.860 SPDK NVMe probe 00:10:44.860 Attaching to 0000:00:10.0 00:10:44.860 Attaching to 0000:00:11.0 00:10:44.860 Attached to 0000:00:10.0 00:10:44.860 Attached to 0000:00:11.0 00:10:44.860 Cleaning up... 
00:10:44.860 00:10:44.860 real 0m0.298s 00:10:44.860 user 0m0.094s 00:10:44.860 sys 0m0.104s 00:10:44.860 ************************************ 00:10:44.860 END TEST env_dpdk_post_init 00:10:44.860 ************************************ 00:10:44.860 22:52:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.860 22:52:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:45.141 22:52:00 env -- env/env.sh@26 -- # uname 00:10:45.141 22:52:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:45.141 22:52:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:45.141 22:52:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.141 22:52:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.141 22:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.141 ************************************ 00:10:45.141 START TEST env_mem_callbacks 00:10:45.141 ************************************ 00:10:45.141 22:52:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:45.141 EAL: Detected CPU lcores: 10 00:10:45.141 EAL: Detected NUMA nodes: 1 00:10:45.141 EAL: Detected shared linkage of DPDK 00:10:45.141 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:45.141 EAL: Selected IOVA mode 'PA' 00:10:45.141 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:45.141 00:10:45.141 00:10:45.141 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.141 http://cunit.sourceforge.net/ 00:10:45.141 00:10:45.141 00:10:45.141 Suite: memory 00:10:45.141 Test: test ... 
00:10:45.141 register 0x200000200000 2097152
00:10:45.141 malloc 3145728
00:10:45.141 register 0x200000400000 4194304
00:10:45.141 buf 0x2000004fffc0 len 3145728 PASSED
00:10:45.141 malloc 64
00:10:45.141 buf 0x2000004ffec0 len 64 PASSED
00:10:45.141 malloc 4194304
00:10:45.141 register 0x200000800000 6291456
00:10:45.141 buf 0x2000009fffc0 len 4194304 PASSED
00:10:45.141 free 0x2000004fffc0 3145728
00:10:45.141 free 0x2000004ffec0 64
00:10:45.141 unregister 0x200000400000 4194304 PASSED
00:10:45.141 free 0x2000009fffc0 4194304
00:10:45.141 unregister 0x200000800000 6291456 PASSED
00:10:45.141 malloc 8388608
00:10:45.141 register 0x200000400000 10485760
00:10:45.400 buf 0x2000005fffc0 len 8388608 PASSED
00:10:45.400 free 0x2000005fffc0 8388608
00:10:45.400 unregister 0x200000400000 10485760 PASSED
00:10:45.400 passed
00:10:45.400
00:10:45.400 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:45.400               suites      1      1    n/a      0        0
00:10:45.400                tests      1      1      1      0        0
00:10:45.400              asserts     15     15     15      0      n/a
00:10:45.400
00:10:45.400 Elapsed time = 0.084 seconds
00:10:45.400
00:10:45.400 real 0m0.284s
00:10:45.400 user 0m0.109s
00:10:45.400 sys 0m0.071s
00:10:45.400 22:52:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:45.400 22:52:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:10:45.400 ************************************
00:10:45.400 END TEST env_mem_callbacks
00:10:45.400 ************************************
00:10:45.400
00:10:45.400 real 0m10.030s
00:10:45.400 user 0m8.254s
00:10:45.400 sys 0m1.412s
00:10:45.400 ************************************
00:10:45.400 END TEST env
00:10:45.400 ************************************
00:10:45.400 22:52:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:45.400 22:52:01 env -- common/autotest_common.sh@10 -- # set +x
00:10:45.400 22:52:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:10:45.401 22:52:01 --
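The env_mem_callbacks trace pairs every `register <vaddr> <len>` with a later matching `unregister` (the PASSED markers), while the initially registered 2MB region stays mapped throughout. A minimal sketch of that bookkeeping, replaying the addresses and lengths from the trace; `RegionRegistry` and its methods are hypothetical illustration names, not SPDK APIs.

```python
# Hypothetical model of the register/unregister bookkeeping that the
# mem_callbacks trace exercises: each "register <vaddr> <len>" adds a
# region, and each "unregister" must exactly match an earlier register.
class RegionRegistry:
    def __init__(self):
        self.regions = {}  # vaddr -> length

    def register(self, vaddr, length):
        self.regions[vaddr] = length

    def unregister(self, vaddr, length):
        # Mirrors the PASSED lines: an unregister only succeeds when it
        # matches a previously registered (vaddr, length) pair.
        assert self.regions.get(vaddr) == length, "mismatched unregister"
        del self.regions[vaddr]

# Replay the events logged by the test.
reg = RegionRegistry()
reg.register(0x200000200000, 2097152)
reg.register(0x200000400000, 4194304)
reg.register(0x200000800000, 6291456)
reg.unregister(0x200000400000, 4194304)
reg.unregister(0x200000800000, 6291456)
reg.register(0x200000400000, 10485760)
reg.unregister(0x200000400000, 10485760)
assert set(reg.regions) == {0x200000200000}  # initial region still mapped
```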
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.400 22:52:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.400 22:52:01 -- common/autotest_common.sh@10 -- # set +x 00:10:45.400 ************************************ 00:10:45.401 START TEST rpc 00:10:45.401 ************************************ 00:10:45.401 22:52:01 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:45.660 * Looking for test storage... 00:10:45.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:45.660 22:52:01 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.660 22:52:01 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.660 22:52:01 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.660 22:52:01 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.660 22:52:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.660 22:52:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.660 22:52:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.660 22:52:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.660 22:52:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.660 22:52:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:45.660 22:52:01 rpc -- scripts/common.sh@345 -- # : 1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.660 22:52:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.660 22:52:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@353 -- # local d=1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.660 22:52:01 rpc -- scripts/common.sh@355 -- # echo 1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.660 22:52:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@353 -- # local d=2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.660 22:52:01 rpc -- scripts/common.sh@355 -- # echo 2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.660 22:52:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.660 22:52:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.660 22:52:01 rpc -- scripts/common.sh@368 -- # return 0 00:10:45.660 22:52:01 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.660 22:52:01 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.660 --rc genhtml_branch_coverage=1 00:10:45.660 --rc genhtml_function_coverage=1 00:10:45.660 --rc genhtml_legend=1 00:10:45.660 --rc geninfo_all_blocks=1 00:10:45.660 --rc geninfo_unexecuted_blocks=1 00:10:45.660 00:10:45.660 ' 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.661 --rc genhtml_branch_coverage=1 00:10:45.661 --rc genhtml_function_coverage=1 00:10:45.661 --rc genhtml_legend=1 00:10:45.661 --rc geninfo_all_blocks=1 00:10:45.661 --rc geninfo_unexecuted_blocks=1 00:10:45.661 00:10:45.661 ' 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:45.661 --rc genhtml_branch_coverage=1 00:10:45.661 --rc genhtml_function_coverage=1 00:10:45.661 --rc genhtml_legend=1 00:10:45.661 --rc geninfo_all_blocks=1 00:10:45.661 --rc geninfo_unexecuted_blocks=1 00:10:45.661 00:10:45.661 ' 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.661 --rc genhtml_branch_coverage=1 00:10:45.661 --rc genhtml_function_coverage=1 00:10:45.661 --rc genhtml_legend=1 00:10:45.661 --rc geninfo_all_blocks=1 00:10:45.661 --rc geninfo_unexecuted_blocks=1 00:10:45.661 00:10:45.661 ' 00:10:45.661 22:52:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57176 00:10:45.661 22:52:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:45.661 22:52:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:45.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.661 22:52:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57176 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@835 -- # '[' -z 57176 ']' 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.661 22:52:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.661 [2024-12-09 22:52:01.483100] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:10:45.661 [2024-12-09 22:52:01.483242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57176 ] 00:10:45.920 [2024-12-09 22:52:01.640390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.920 [2024-12-09 22:52:01.771273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:45.920 [2024-12-09 22:52:01.771339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57176' to capture a snapshot of events at runtime. 00:10:45.920 [2024-12-09 22:52:01.771352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.920 [2024-12-09 22:52:01.771365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.920 [2024-12-09 22:52:01.771375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57176 for offline analysis/debug. 
00:10:45.920 [2024-12-09 22:52:01.772757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.857 22:52:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.857 22:52:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:46.857 22:52:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:46.857 22:52:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:46.857 22:52:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:46.857 22:52:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:46.857 22:52:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.857 22:52:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.857 22:52:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.857 ************************************ 00:10:46.857 START TEST rpc_integrity 00:10:46.857 ************************************ 00:10:46.857 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:47.117 22:52:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:47.117 { 00:10:47.117 "name": "Malloc0", 00:10:47.117 "aliases": [ 00:10:47.117 "1c202eb8-00de-475f-85fe-d6fea07e645b" 00:10:47.117 ], 00:10:47.117 "product_name": "Malloc disk", 00:10:47.117 "block_size": 512, 00:10:47.117 "num_blocks": 16384, 00:10:47.117 "uuid": "1c202eb8-00de-475f-85fe-d6fea07e645b", 00:10:47.117 "assigned_rate_limits": { 00:10:47.117 "rw_ios_per_sec": 0, 00:10:47.117 "rw_mbytes_per_sec": 0, 00:10:47.117 "r_mbytes_per_sec": 0, 00:10:47.117 "w_mbytes_per_sec": 0 00:10:47.117 }, 00:10:47.117 "claimed": false, 00:10:47.117 "zoned": false, 00:10:47.117 "supported_io_types": { 00:10:47.117 "read": true, 00:10:47.117 "write": true, 00:10:47.117 "unmap": true, 00:10:47.117 "flush": true, 00:10:47.117 "reset": true, 00:10:47.117 "nvme_admin": false, 00:10:47.117 "nvme_io": false, 00:10:47.117 "nvme_io_md": false, 00:10:47.117 "write_zeroes": true, 00:10:47.117 "zcopy": true, 00:10:47.117 "get_zone_info": false, 00:10:47.117 "zone_management": false, 00:10:47.117 "zone_append": false, 00:10:47.117 "compare": false, 00:10:47.117 "compare_and_write": false, 00:10:47.117 "abort": true, 00:10:47.117 "seek_hole": false, 
00:10:47.117 "seek_data": false, 00:10:47.117 "copy": true, 00:10:47.117 "nvme_iov_md": false 00:10:47.117 }, 00:10:47.117 "memory_domains": [ 00:10:47.117 { 00:10:47.117 "dma_device_id": "system", 00:10:47.117 "dma_device_type": 1 00:10:47.117 }, 00:10:47.117 { 00:10:47.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.117 "dma_device_type": 2 00:10:47.117 } 00:10:47.117 ], 00:10:47.117 "driver_specific": {} 00:10:47.117 } 00:10:47.117 ]' 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.117 [2024-12-09 22:52:02.871923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:47.117 [2024-12-09 22:52:02.872006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.117 [2024-12-09 22:52:02.872041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:47.117 [2024-12-09 22:52:02.872061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.117 [2024-12-09 22:52:02.874781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.117 [2024-12-09 22:52:02.874834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:47.117 Passthru0 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:10:47.117 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.117 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:47.117 { 00:10:47.117 "name": "Malloc0", 00:10:47.117 "aliases": [ 00:10:47.117 "1c202eb8-00de-475f-85fe-d6fea07e645b" 00:10:47.117 ], 00:10:47.117 "product_name": "Malloc disk", 00:10:47.117 "block_size": 512, 00:10:47.117 "num_blocks": 16384, 00:10:47.117 "uuid": "1c202eb8-00de-475f-85fe-d6fea07e645b", 00:10:47.117 "assigned_rate_limits": { 00:10:47.117 "rw_ios_per_sec": 0, 00:10:47.117 "rw_mbytes_per_sec": 0, 00:10:47.117 "r_mbytes_per_sec": 0, 00:10:47.117 "w_mbytes_per_sec": 0 00:10:47.117 }, 00:10:47.117 "claimed": true, 00:10:47.117 "claim_type": "exclusive_write", 00:10:47.117 "zoned": false, 00:10:47.117 "supported_io_types": { 00:10:47.117 "read": true, 00:10:47.117 "write": true, 00:10:47.117 "unmap": true, 00:10:47.117 "flush": true, 00:10:47.117 "reset": true, 00:10:47.117 "nvme_admin": false, 00:10:47.117 "nvme_io": false, 00:10:47.117 "nvme_io_md": false, 00:10:47.117 "write_zeroes": true, 00:10:47.117 "zcopy": true, 00:10:47.117 "get_zone_info": false, 00:10:47.117 "zone_management": false, 00:10:47.117 "zone_append": false, 00:10:47.117 "compare": false, 00:10:47.117 "compare_and_write": false, 00:10:47.117 "abort": true, 00:10:47.117 "seek_hole": false, 00:10:47.117 "seek_data": false, 00:10:47.117 "copy": true, 00:10:47.117 "nvme_iov_md": false 00:10:47.117 }, 00:10:47.117 "memory_domains": [ 00:10:47.117 { 00:10:47.117 "dma_device_id": "system", 00:10:47.117 "dma_device_type": 1 00:10:47.117 }, 00:10:47.117 { 00:10:47.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.117 "dma_device_type": 2 00:10:47.117 } 00:10:47.117 ], 00:10:47.117 "driver_specific": {} 00:10:47.117 }, 00:10:47.117 { 00:10:47.117 "name": "Passthru0", 00:10:47.117 "aliases": [ 00:10:47.117 "eae89bba-cb6d-591e-999f-4a85046dbe40" 00:10:47.117 ], 00:10:47.117 "product_name": "passthru", 00:10:47.117 
"block_size": 512, 00:10:47.117 "num_blocks": 16384, 00:10:47.117 "uuid": "eae89bba-cb6d-591e-999f-4a85046dbe40", 00:10:47.117 "assigned_rate_limits": { 00:10:47.118 "rw_ios_per_sec": 0, 00:10:47.118 "rw_mbytes_per_sec": 0, 00:10:47.118 "r_mbytes_per_sec": 0, 00:10:47.118 "w_mbytes_per_sec": 0 00:10:47.118 }, 00:10:47.118 "claimed": false, 00:10:47.118 "zoned": false, 00:10:47.118 "supported_io_types": { 00:10:47.118 "read": true, 00:10:47.118 "write": true, 00:10:47.118 "unmap": true, 00:10:47.118 "flush": true, 00:10:47.118 "reset": true, 00:10:47.118 "nvme_admin": false, 00:10:47.118 "nvme_io": false, 00:10:47.118 "nvme_io_md": false, 00:10:47.118 "write_zeroes": true, 00:10:47.118 "zcopy": true, 00:10:47.118 "get_zone_info": false, 00:10:47.118 "zone_management": false, 00:10:47.118 "zone_append": false, 00:10:47.118 "compare": false, 00:10:47.118 "compare_and_write": false, 00:10:47.118 "abort": true, 00:10:47.118 "seek_hole": false, 00:10:47.118 "seek_data": false, 00:10:47.118 "copy": true, 00:10:47.118 "nvme_iov_md": false 00:10:47.118 }, 00:10:47.118 "memory_domains": [ 00:10:47.118 { 00:10:47.118 "dma_device_id": "system", 00:10:47.118 "dma_device_type": 1 00:10:47.118 }, 00:10:47.118 { 00:10:47.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.118 "dma_device_type": 2 00:10:47.118 } 00:10:47.118 ], 00:10:47.118 "driver_specific": { 00:10:47.118 "passthru": { 00:10:47.118 "name": "Passthru0", 00:10:47.118 "base_bdev_name": "Malloc0" 00:10:47.118 } 00:10:47.118 } 00:10:47.118 } 00:10:47.118 ]' 00:10:47.118 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:47.118 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:47.118 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:47.118 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.118 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.118 22:52:02 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.118 22:52:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:47.118 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.118 22:52:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 22:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.377 22:52:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:47.377 22:52:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.377 22:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 22:52:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.377 22:52:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:47.377 22:52:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:47.377 ************************************ 00:10:47.377 END TEST rpc_integrity 00:10:47.377 ************************************ 00:10:47.377 22:52:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:47.377 00:10:47.377 real 0m0.356s 00:10:47.377 user 0m0.182s 00:10:47.377 sys 0m0.056s 00:10:47.377 22:52:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.377 22:52:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 22:52:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:47.377 22:52:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.377 22:52:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.377 22:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 ************************************ 00:10:47.377 START TEST rpc_plugins 00:10:47.377 ************************************ 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:47.377 { 00:10:47.377 "name": "Malloc1", 00:10:47.377 "aliases": [ 00:10:47.377 "ae140aad-c40d-4c84-b4d0-88a3a371a482" 00:10:47.377 ], 00:10:47.377 "product_name": "Malloc disk", 00:10:47.377 "block_size": 4096, 00:10:47.377 "num_blocks": 256, 00:10:47.377 "uuid": "ae140aad-c40d-4c84-b4d0-88a3a371a482", 00:10:47.377 "assigned_rate_limits": { 00:10:47.377 "rw_ios_per_sec": 0, 00:10:47.377 "rw_mbytes_per_sec": 0, 00:10:47.377 "r_mbytes_per_sec": 0, 00:10:47.377 "w_mbytes_per_sec": 0 00:10:47.377 }, 00:10:47.377 "claimed": false, 00:10:47.377 "zoned": false, 00:10:47.377 "supported_io_types": { 00:10:47.377 "read": true, 00:10:47.377 "write": true, 00:10:47.377 "unmap": true, 00:10:47.377 "flush": true, 00:10:47.377 "reset": true, 00:10:47.377 "nvme_admin": false, 00:10:47.377 "nvme_io": false, 00:10:47.377 "nvme_io_md": false, 00:10:47.377 "write_zeroes": true, 00:10:47.377 "zcopy": true, 00:10:47.377 "get_zone_info": false, 00:10:47.377 "zone_management": false, 00:10:47.377 "zone_append": false, 00:10:47.377 "compare": false, 00:10:47.377 "compare_and_write": false, 00:10:47.377 "abort": true, 00:10:47.377 "seek_hole": false, 00:10:47.377 "seek_data": false, 00:10:47.377 "copy": 
true, 00:10:47.377 "nvme_iov_md": false 00:10:47.377 }, 00:10:47.377 "memory_domains": [ 00:10:47.377 { 00:10:47.377 "dma_device_id": "system", 00:10:47.377 "dma_device_type": 1 00:10:47.377 }, 00:10:47.377 { 00:10:47.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.377 "dma_device_type": 2 00:10:47.377 } 00:10:47.377 ], 00:10:47.377 "driver_specific": {} 00:10:47.377 } 00:10:47.377 ]' 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.377 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.377 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:47.637 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.637 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:47.637 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:47.638 ************************************ 00:10:47.638 END TEST rpc_plugins 00:10:47.638 ************************************ 00:10:47.638 22:52:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:47.638 00:10:47.638 real 0m0.161s 00:10:47.638 user 0m0.083s 00:10:47.638 sys 0m0.029s 00:10:47.638 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.638 22:52:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:47.638 22:52:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:47.638 22:52:03 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.638 22:52:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.638 22:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.638 ************************************ 00:10:47.638 START TEST rpc_trace_cmd_test 00:10:47.638 ************************************ 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:47.638 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57176", 00:10:47.638 "tpoint_group_mask": "0x8", 00:10:47.638 "iscsi_conn": { 00:10:47.638 "mask": "0x2", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "scsi": { 00:10:47.638 "mask": "0x4", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "bdev": { 00:10:47.638 "mask": "0x8", 00:10:47.638 "tpoint_mask": "0xffffffffffffffff" 00:10:47.638 }, 00:10:47.638 "nvmf_rdma": { 00:10:47.638 "mask": "0x10", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "nvmf_tcp": { 00:10:47.638 "mask": "0x20", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "ftl": { 00:10:47.638 "mask": "0x40", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "blobfs": { 00:10:47.638 "mask": "0x80", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "dsa": { 00:10:47.638 "mask": "0x200", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "thread": { 00:10:47.638 "mask": "0x400", 00:10:47.638 
"tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "nvme_pcie": { 00:10:47.638 "mask": "0x800", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "iaa": { 00:10:47.638 "mask": "0x1000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "nvme_tcp": { 00:10:47.638 "mask": "0x2000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "bdev_nvme": { 00:10:47.638 "mask": "0x4000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "sock": { 00:10:47.638 "mask": "0x8000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "blob": { 00:10:47.638 "mask": "0x10000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "bdev_raid": { 00:10:47.638 "mask": "0x20000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 }, 00:10:47.638 "scheduler": { 00:10:47.638 "mask": "0x40000", 00:10:47.638 "tpoint_mask": "0x0" 00:10:47.638 } 00:10:47.638 }' 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:47.638 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:47.898 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:47.898 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:47.898 ************************************ 00:10:47.898 END TEST rpc_trace_cmd_test 00:10:47.898 ************************************ 00:10:47.898 22:52:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:47.898 00:10:47.898 real 0m0.227s 00:10:47.898 user 
0m0.179s 00:10:47.898 sys 0m0.037s 00:10:47.898 22:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.898 22:52:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 22:52:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:47.898 22:52:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:47.898 22:52:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:47.898 22:52:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.898 22:52:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.898 22:52:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 ************************************ 00:10:47.898 START TEST rpc_daemon_integrity 00:10:47.898 ************************************ 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.898 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:47.898 { 00:10:47.898 "name": "Malloc2", 00:10:47.898 "aliases": [ 00:10:47.898 "f68a9b14-467d-4544-a1e2-925ec182ee9f" 00:10:47.898 ], 00:10:47.898 "product_name": "Malloc disk", 00:10:47.898 "block_size": 512, 00:10:47.898 "num_blocks": 16384, 00:10:47.898 "uuid": "f68a9b14-467d-4544-a1e2-925ec182ee9f", 00:10:47.898 "assigned_rate_limits": { 00:10:47.898 "rw_ios_per_sec": 0, 00:10:47.898 "rw_mbytes_per_sec": 0, 00:10:47.898 "r_mbytes_per_sec": 0, 00:10:47.898 "w_mbytes_per_sec": 0 00:10:47.898 }, 00:10:47.898 "claimed": false, 00:10:47.898 "zoned": false, 00:10:47.898 "supported_io_types": { 00:10:47.898 "read": true, 00:10:47.898 "write": true, 00:10:47.898 "unmap": true, 00:10:47.898 "flush": true, 00:10:47.898 "reset": true, 00:10:47.898 "nvme_admin": false, 00:10:47.898 "nvme_io": false, 00:10:47.898 "nvme_io_md": false, 00:10:47.898 "write_zeroes": true, 00:10:47.898 "zcopy": true, 00:10:47.899 "get_zone_info": false, 00:10:47.899 "zone_management": false, 00:10:47.899 "zone_append": false, 00:10:47.899 "compare": false, 00:10:47.899 "compare_and_write": false, 00:10:47.899 "abort": true, 00:10:47.899 "seek_hole": false, 00:10:47.899 "seek_data": false, 00:10:47.899 "copy": true, 00:10:47.899 "nvme_iov_md": false 00:10:47.899 }, 00:10:47.899 "memory_domains": [ 00:10:47.899 { 00:10:47.899 "dma_device_id": "system", 00:10:47.899 "dma_device_type": 1 00:10:47.899 }, 00:10:47.899 { 00:10:47.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.899 "dma_device_type": 2 00:10:47.899 } 
00:10:47.899 ], 00:10:47.899 "driver_specific": {} 00:10:47.899 } 00:10:47.899 ]' 00:10:47.899 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:48.176 [2024-12-09 22:52:03.794725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:48.176 [2024-12-09 22:52:03.794805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.176 [2024-12-09 22:52:03.794834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:48.176 [2024-12-09 22:52:03.794848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.176 [2024-12-09 22:52:03.797603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.176 [2024-12-09 22:52:03.797718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:48.176 Passthru0 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:48.176 { 00:10:48.176 "name": "Malloc2", 00:10:48.176 "aliases": [ 00:10:48.176 "f68a9b14-467d-4544-a1e2-925ec182ee9f" 
00:10:48.176 ], 00:10:48.176 "product_name": "Malloc disk", 00:10:48.176 "block_size": 512, 00:10:48.176 "num_blocks": 16384, 00:10:48.176 "uuid": "f68a9b14-467d-4544-a1e2-925ec182ee9f", 00:10:48.176 "assigned_rate_limits": { 00:10:48.176 "rw_ios_per_sec": 0, 00:10:48.176 "rw_mbytes_per_sec": 0, 00:10:48.176 "r_mbytes_per_sec": 0, 00:10:48.176 "w_mbytes_per_sec": 0 00:10:48.176 }, 00:10:48.176 "claimed": true, 00:10:48.176 "claim_type": "exclusive_write", 00:10:48.176 "zoned": false, 00:10:48.176 "supported_io_types": { 00:10:48.176 "read": true, 00:10:48.176 "write": true, 00:10:48.176 "unmap": true, 00:10:48.176 "flush": true, 00:10:48.176 "reset": true, 00:10:48.176 "nvme_admin": false, 00:10:48.176 "nvme_io": false, 00:10:48.176 "nvme_io_md": false, 00:10:48.176 "write_zeroes": true, 00:10:48.176 "zcopy": true, 00:10:48.176 "get_zone_info": false, 00:10:48.176 "zone_management": false, 00:10:48.176 "zone_append": false, 00:10:48.176 "compare": false, 00:10:48.176 "compare_and_write": false, 00:10:48.176 "abort": true, 00:10:48.176 "seek_hole": false, 00:10:48.176 "seek_data": false, 00:10:48.176 "copy": true, 00:10:48.176 "nvme_iov_md": false 00:10:48.176 }, 00:10:48.176 "memory_domains": [ 00:10:48.176 { 00:10:48.176 "dma_device_id": "system", 00:10:48.176 "dma_device_type": 1 00:10:48.176 }, 00:10:48.176 { 00:10:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.176 "dma_device_type": 2 00:10:48.176 } 00:10:48.176 ], 00:10:48.176 "driver_specific": {} 00:10:48.176 }, 00:10:48.176 { 00:10:48.176 "name": "Passthru0", 00:10:48.176 "aliases": [ 00:10:48.176 "141d1fe0-3772-54dd-82ea-02f34009de74" 00:10:48.176 ], 00:10:48.176 "product_name": "passthru", 00:10:48.176 "block_size": 512, 00:10:48.176 "num_blocks": 16384, 00:10:48.176 "uuid": "141d1fe0-3772-54dd-82ea-02f34009de74", 00:10:48.176 "assigned_rate_limits": { 00:10:48.176 "rw_ios_per_sec": 0, 00:10:48.176 "rw_mbytes_per_sec": 0, 00:10:48.176 "r_mbytes_per_sec": 0, 00:10:48.176 "w_mbytes_per_sec": 0 
00:10:48.176 }, 00:10:48.176 "claimed": false, 00:10:48.176 "zoned": false, 00:10:48.176 "supported_io_types": { 00:10:48.176 "read": true, 00:10:48.176 "write": true, 00:10:48.176 "unmap": true, 00:10:48.176 "flush": true, 00:10:48.176 "reset": true, 00:10:48.176 "nvme_admin": false, 00:10:48.176 "nvme_io": false, 00:10:48.176 "nvme_io_md": false, 00:10:48.176 "write_zeroes": true, 00:10:48.176 "zcopy": true, 00:10:48.176 "get_zone_info": false, 00:10:48.176 "zone_management": false, 00:10:48.176 "zone_append": false, 00:10:48.176 "compare": false, 00:10:48.176 "compare_and_write": false, 00:10:48.176 "abort": true, 00:10:48.176 "seek_hole": false, 00:10:48.176 "seek_data": false, 00:10:48.176 "copy": true, 00:10:48.176 "nvme_iov_md": false 00:10:48.176 }, 00:10:48.176 "memory_domains": [ 00:10:48.176 { 00:10:48.176 "dma_device_id": "system", 00:10:48.176 "dma_device_type": 1 00:10:48.176 }, 00:10:48.176 { 00:10:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.176 "dma_device_type": 2 00:10:48.176 } 00:10:48.176 ], 00:10:48.176 "driver_specific": { 00:10:48.176 "passthru": { 00:10:48.176 "name": "Passthru0", 00:10:48.176 "base_bdev_name": "Malloc2" 00:10:48.176 } 00:10:48.176 } 00:10:48.176 } 00:10:48.176 ]' 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:48.176 ************************************ 00:10:48.176 END TEST rpc_daemon_integrity 00:10:48.176 ************************************ 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:48.176 00:10:48.176 real 0m0.362s 00:10:48.176 user 0m0.191s 00:10:48.176 sys 0m0.054s 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.176 22:52:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:48.451 22:52:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:48.451 22:52:04 rpc -- rpc/rpc.sh@84 -- # killprocess 57176 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 57176 ']' 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@958 -- # kill -0 57176 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@959 -- # uname 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57176 00:10:48.451 killing process with pid 57176 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57176' 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@973 -- # kill 57176 00:10:48.451 22:52:04 rpc -- common/autotest_common.sh@978 -- # wait 57176 00:10:50.993 00:10:50.994 real 0m5.440s 00:10:50.994 user 0m5.905s 00:10:50.994 sys 0m0.968s 00:10:50.994 ************************************ 00:10:50.994 END TEST rpc 00:10:50.994 ************************************ 00:10:50.994 22:52:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.994 22:52:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.994 22:52:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:50.994 22:52:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.994 22:52:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.994 22:52:06 -- common/autotest_common.sh@10 -- # set +x 00:10:50.994 ************************************ 00:10:50.994 START TEST skip_rpc 00:10:50.994 ************************************ 00:10:50.994 22:52:06 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:50.994 * Looking for test storage... 
00:10:50.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:50.994 22:52:06 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.994 22:52:06 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.994 22:52:06 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.254 22:52:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.254 --rc genhtml_branch_coverage=1 00:10:51.254 --rc genhtml_function_coverage=1 00:10:51.254 --rc genhtml_legend=1 00:10:51.254 --rc geninfo_all_blocks=1 00:10:51.254 --rc geninfo_unexecuted_blocks=1 00:10:51.254 00:10:51.254 ' 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.254 --rc genhtml_branch_coverage=1 00:10:51.254 --rc genhtml_function_coverage=1 00:10:51.254 --rc genhtml_legend=1 00:10:51.254 --rc geninfo_all_blocks=1 00:10:51.254 --rc geninfo_unexecuted_blocks=1 00:10:51.254 00:10:51.254 ' 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:10:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.254 --rc genhtml_branch_coverage=1 00:10:51.254 --rc genhtml_function_coverage=1 00:10:51.254 --rc genhtml_legend=1 00:10:51.254 --rc geninfo_all_blocks=1 00:10:51.254 --rc geninfo_unexecuted_blocks=1 00:10:51.254 00:10:51.254 ' 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.254 --rc genhtml_branch_coverage=1 00:10:51.254 --rc genhtml_function_coverage=1 00:10:51.254 --rc genhtml_legend=1 00:10:51.254 --rc geninfo_all_blocks=1 00:10:51.254 --rc geninfo_unexecuted_blocks=1 00:10:51.254 00:10:51.254 ' 00:10:51.254 22:52:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:51.254 22:52:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:51.254 22:52:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.254 22:52:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.254 ************************************ 00:10:51.254 START TEST skip_rpc 00:10:51.254 ************************************ 00:10:51.255 22:52:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:51.255 22:52:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57405 00:10:51.255 22:52:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:51.255 22:52:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:51.255 22:52:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:51.255 [2024-12-09 22:52:06.997694] Starting SPDK v25.01-pre 
git sha1 06358c250 / DPDK 24.03.0 initialization... 00:10:51.255 [2024-12-09 22:52:06.997876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57405 ] 00:10:51.514 [2024-12-09 22:52:07.174825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.514 [2024-12-09 22:52:07.294764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57405 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57405 ']' 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57405 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57405 00:10:56.791 killing process with pid 57405 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57405' 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57405 00:10:56.791 22:52:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57405 00:10:58.701 ************************************ 00:10:58.701 END TEST skip_rpc 00:10:58.701 ************************************ 00:10:58.701 00:10:58.701 real 0m7.529s 00:10:58.701 user 0m7.056s 00:10:58.701 sys 0m0.385s 00:10:58.701 22:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.701 22:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.701 22:52:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:58.701 22:52:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:58.701 22:52:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.701 22:52:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.701 
************************************ 00:10:58.701 START TEST skip_rpc_with_json 00:10:58.701 ************************************ 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57520 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57520 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57520 ']' 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.701 22:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:58.961 [2024-12-09 22:52:14.586588] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:10:58.961 [2024-12-09 22:52:14.586708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57520 ] 00:10:58.961 [2024-12-09 22:52:14.761938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.220 [2024-12-09 22:52:14.881856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:00.160 [2024-12-09 22:52:15.797289] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:00.160 request: 00:11:00.160 { 00:11:00.160 "trtype": "tcp", 00:11:00.160 "method": "nvmf_get_transports", 00:11:00.160 "req_id": 1 00:11:00.160 } 00:11:00.160 Got JSON-RPC error response 00:11:00.160 response: 00:11:00.160 { 00:11:00.160 "code": -19, 00:11:00.160 "message": "No such device" 00:11:00.160 } 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:00.160 [2024-12-09 22:52:15.809412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.160 22:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:00.160 { 00:11:00.160 "subsystems": [ 00:11:00.160 { 00:11:00.160 "subsystem": "fsdev", 00:11:00.160 "config": [ 00:11:00.160 { 00:11:00.160 "method": "fsdev_set_opts", 00:11:00.160 "params": { 00:11:00.160 "fsdev_io_pool_size": 65535, 00:11:00.160 "fsdev_io_cache_size": 256 00:11:00.160 } 00:11:00.160 } 00:11:00.160 ] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "keyring", 00:11:00.160 "config": [] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "iobuf", 00:11:00.160 "config": [ 00:11:00.160 { 00:11:00.160 "method": "iobuf_set_options", 00:11:00.160 "params": { 00:11:00.160 "small_pool_count": 8192, 00:11:00.160 "large_pool_count": 1024, 00:11:00.160 "small_bufsize": 8192, 00:11:00.160 "large_bufsize": 135168, 00:11:00.160 "enable_numa": false 00:11:00.160 } 00:11:00.160 } 00:11:00.160 ] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "sock", 00:11:00.160 "config": [ 00:11:00.160 { 00:11:00.160 "method": "sock_set_default_impl", 00:11:00.160 "params": { 00:11:00.160 "impl_name": "posix" 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "sock_impl_set_options", 00:11:00.160 "params": { 00:11:00.160 "impl_name": "ssl", 00:11:00.160 "recv_buf_size": 4096, 00:11:00.160 "send_buf_size": 4096, 00:11:00.160 "enable_recv_pipe": true, 00:11:00.160 "enable_quickack": false, 00:11:00.160 
"enable_placement_id": 0, 00:11:00.160 "enable_zerocopy_send_server": true, 00:11:00.160 "enable_zerocopy_send_client": false, 00:11:00.160 "zerocopy_threshold": 0, 00:11:00.160 "tls_version": 0, 00:11:00.160 "enable_ktls": false 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "sock_impl_set_options", 00:11:00.160 "params": { 00:11:00.160 "impl_name": "posix", 00:11:00.160 "recv_buf_size": 2097152, 00:11:00.160 "send_buf_size": 2097152, 00:11:00.160 "enable_recv_pipe": true, 00:11:00.160 "enable_quickack": false, 00:11:00.160 "enable_placement_id": 0, 00:11:00.160 "enable_zerocopy_send_server": true, 00:11:00.160 "enable_zerocopy_send_client": false, 00:11:00.160 "zerocopy_threshold": 0, 00:11:00.160 "tls_version": 0, 00:11:00.160 "enable_ktls": false 00:11:00.160 } 00:11:00.160 } 00:11:00.160 ] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "vmd", 00:11:00.160 "config": [] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "accel", 00:11:00.160 "config": [ 00:11:00.160 { 00:11:00.160 "method": "accel_set_options", 00:11:00.160 "params": { 00:11:00.160 "small_cache_size": 128, 00:11:00.160 "large_cache_size": 16, 00:11:00.160 "task_count": 2048, 00:11:00.160 "sequence_count": 2048, 00:11:00.160 "buf_count": 2048 00:11:00.160 } 00:11:00.160 } 00:11:00.160 ] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "bdev", 00:11:00.160 "config": [ 00:11:00.160 { 00:11:00.160 "method": "bdev_set_options", 00:11:00.160 "params": { 00:11:00.160 "bdev_io_pool_size": 65535, 00:11:00.160 "bdev_io_cache_size": 256, 00:11:00.160 "bdev_auto_examine": true, 00:11:00.160 "iobuf_small_cache_size": 128, 00:11:00.160 "iobuf_large_cache_size": 16 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "bdev_raid_set_options", 00:11:00.160 "params": { 00:11:00.160 "process_window_size_kb": 1024, 00:11:00.160 "process_max_bandwidth_mb_sec": 0 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "bdev_iscsi_set_options", 
00:11:00.160 "params": { 00:11:00.160 "timeout_sec": 30 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "bdev_nvme_set_options", 00:11:00.160 "params": { 00:11:00.160 "action_on_timeout": "none", 00:11:00.160 "timeout_us": 0, 00:11:00.160 "timeout_admin_us": 0, 00:11:00.160 "keep_alive_timeout_ms": 10000, 00:11:00.160 "arbitration_burst": 0, 00:11:00.160 "low_priority_weight": 0, 00:11:00.160 "medium_priority_weight": 0, 00:11:00.160 "high_priority_weight": 0, 00:11:00.160 "nvme_adminq_poll_period_us": 10000, 00:11:00.160 "nvme_ioq_poll_period_us": 0, 00:11:00.160 "io_queue_requests": 0, 00:11:00.160 "delay_cmd_submit": true, 00:11:00.160 "transport_retry_count": 4, 00:11:00.160 "bdev_retry_count": 3, 00:11:00.160 "transport_ack_timeout": 0, 00:11:00.160 "ctrlr_loss_timeout_sec": 0, 00:11:00.160 "reconnect_delay_sec": 0, 00:11:00.160 "fast_io_fail_timeout_sec": 0, 00:11:00.160 "disable_auto_failback": false, 00:11:00.160 "generate_uuids": false, 00:11:00.160 "transport_tos": 0, 00:11:00.160 "nvme_error_stat": false, 00:11:00.160 "rdma_srq_size": 0, 00:11:00.160 "io_path_stat": false, 00:11:00.160 "allow_accel_sequence": false, 00:11:00.160 "rdma_max_cq_size": 0, 00:11:00.160 "rdma_cm_event_timeout_ms": 0, 00:11:00.160 "dhchap_digests": [ 00:11:00.160 "sha256", 00:11:00.160 "sha384", 00:11:00.160 "sha512" 00:11:00.160 ], 00:11:00.160 "dhchap_dhgroups": [ 00:11:00.160 "null", 00:11:00.160 "ffdhe2048", 00:11:00.160 "ffdhe3072", 00:11:00.160 "ffdhe4096", 00:11:00.160 "ffdhe6144", 00:11:00.160 "ffdhe8192" 00:11:00.160 ] 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "bdev_nvme_set_hotplug", 00:11:00.160 "params": { 00:11:00.160 "period_us": 100000, 00:11:00.160 "enable": false 00:11:00.160 } 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "method": "bdev_wait_for_examine" 00:11:00.160 } 00:11:00.160 ] 00:11:00.160 }, 00:11:00.160 { 00:11:00.160 "subsystem": "scsi", 00:11:00.160 "config": null 00:11:00.160 }, 00:11:00.160 { 
00:11:00.160 "subsystem": "scheduler", 00:11:00.160 "config": [ 00:11:00.160 { 00:11:00.160 "method": "framework_set_scheduler", 00:11:00.160 "params": { 00:11:00.161 "name": "static" 00:11:00.161 } 00:11:00.161 } 00:11:00.161 ] 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "subsystem": "vhost_scsi", 00:11:00.161 "config": [] 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "subsystem": "vhost_blk", 00:11:00.161 "config": [] 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "subsystem": "ublk", 00:11:00.161 "config": [] 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "subsystem": "nbd", 00:11:00.161 "config": [] 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "subsystem": "nvmf", 00:11:00.161 "config": [ 00:11:00.161 { 00:11:00.161 "method": "nvmf_set_config", 00:11:00.161 "params": { 00:11:00.161 "discovery_filter": "match_any", 00:11:00.161 "admin_cmd_passthru": { 00:11:00.161 "identify_ctrlr": false 00:11:00.161 }, 00:11:00.161 "dhchap_digests": [ 00:11:00.161 "sha256", 00:11:00.161 "sha384", 00:11:00.161 "sha512" 00:11:00.161 ], 00:11:00.161 "dhchap_dhgroups": [ 00:11:00.161 "null", 00:11:00.161 "ffdhe2048", 00:11:00.161 "ffdhe3072", 00:11:00.161 "ffdhe4096", 00:11:00.161 "ffdhe6144", 00:11:00.161 "ffdhe8192" 00:11:00.161 ] 00:11:00.161 } 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "method": "nvmf_set_max_subsystems", 00:11:00.161 "params": { 00:11:00.161 "max_subsystems": 1024 00:11:00.161 } 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "method": "nvmf_set_crdt", 00:11:00.161 "params": { 00:11:00.161 "crdt1": 0, 00:11:00.161 "crdt2": 0, 00:11:00.161 "crdt3": 0 00:11:00.161 } 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "method": "nvmf_create_transport", 00:11:00.161 "params": { 00:11:00.161 "trtype": "TCP", 00:11:00.161 "max_queue_depth": 128, 00:11:00.161 "max_io_qpairs_per_ctrlr": 127, 00:11:00.161 "in_capsule_data_size": 4096, 00:11:00.161 "max_io_size": 131072, 00:11:00.161 "io_unit_size": 131072, 00:11:00.161 "max_aq_depth": 128, 00:11:00.161 "num_shared_buffers": 511, 
00:11:00.161 "buf_cache_size": 4294967295, 00:11:00.161 "dif_insert_or_strip": false, 00:11:00.161 "zcopy": false, 00:11:00.161 "c2h_success": true, 00:11:00.161 "sock_priority": 0, 00:11:00.161 "abort_timeout_sec": 1, 00:11:00.161 "ack_timeout": 0, 00:11:00.161 "data_wr_pool_size": 0 00:11:00.161 } 00:11:00.161 } 00:11:00.161 ] 00:11:00.161 }, 00:11:00.161 { 00:11:00.161 "subsystem": "iscsi", 00:11:00.161 "config": [ 00:11:00.161 { 00:11:00.161 "method": "iscsi_set_options", 00:11:00.161 "params": { 00:11:00.161 "node_base": "iqn.2016-06.io.spdk", 00:11:00.161 "max_sessions": 128, 00:11:00.161 "max_connections_per_session": 2, 00:11:00.161 "max_queue_depth": 64, 00:11:00.161 "default_time2wait": 2, 00:11:00.161 "default_time2retain": 20, 00:11:00.161 "first_burst_length": 8192, 00:11:00.161 "immediate_data": true, 00:11:00.161 "allow_duplicated_isid": false, 00:11:00.161 "error_recovery_level": 0, 00:11:00.161 "nop_timeout": 60, 00:11:00.161 "nop_in_interval": 30, 00:11:00.161 "disable_chap": false, 00:11:00.161 "require_chap": false, 00:11:00.161 "mutual_chap": false, 00:11:00.161 "chap_group": 0, 00:11:00.161 "max_large_datain_per_connection": 64, 00:11:00.161 "max_r2t_per_connection": 4, 00:11:00.161 "pdu_pool_size": 36864, 00:11:00.161 "immediate_data_pool_size": 16384, 00:11:00.161 "data_out_pool_size": 2048 00:11:00.161 } 00:11:00.161 } 00:11:00.161 ] 00:11:00.161 } 00:11:00.161 ] 00:11:00.161 } 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57520 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57520 ']' 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57520 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.161 22:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57520 00:11:00.422 22:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.422 22:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.422 22:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57520' 00:11:00.422 killing process with pid 57520 00:11:00.422 22:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57520 00:11:00.422 22:52:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57520 00:11:02.994 22:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:02.994 22:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57575 00:11:02.994 22:52:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57575 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57575 ']' 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57575 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57575 00:11:08.268 killing process with pid 57575 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57575' 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57575 00:11:08.268 22:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57575 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:10.803 00:11:10.803 real 0m11.760s 00:11:10.803 user 0m11.170s 00:11:10.803 sys 0m0.882s 00:11:10.803 ************************************ 00:11:10.803 END TEST skip_rpc_with_json 00:11:10.803 ************************************ 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:10.803 22:52:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:10.803 22:52:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.803 22:52:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.803 22:52:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.803 ************************************ 00:11:10.803 START TEST skip_rpc_with_delay 00:11:10.803 ************************************ 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:10.803 
22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:10.803 [2024-12-09 22:52:26.425862] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:11:10.803 ************************************ 00:11:10.803 END TEST skip_rpc_with_delay 00:11:10.803 ************************************ 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.803 00:11:10.803 real 0m0.170s 00:11:10.803 user 0m0.093s 00:11:10.803 sys 0m0.075s 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.803 22:52:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:10.803 22:52:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:10.803 22:52:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:10.803 22:52:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:10.803 22:52:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.803 22:52:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.803 22:52:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.803 ************************************ 00:11:10.803 START TEST exit_on_failed_rpc_init 00:11:10.803 ************************************ 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57704 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57704 00:11:10.803 22:52:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57704 ']' 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.803 22:52:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:11.063 [2024-12-09 22:52:26.666592] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:11:11.063 [2024-12-09 22:52:26.666712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57704 ] 00:11:11.063 [2024-12-09 22:52:26.843785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.322 [2024-12-09 22:52:26.964680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:12.260 22:52:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:12.260 22:52:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:12.260 [2024-12-09 22:52:27.978527] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:11:12.260 [2024-12-09 22:52:27.978736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57728 ] 00:11:12.519 [2024-12-09 22:52:28.155307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.519 [2024-12-09 22:52:28.271842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.519 [2024-12-09 22:52:28.272200] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:12.519 [2024-12-09 22:52:28.272275] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:12.519 [2024-12-09 22:52:28.272363] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57704 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57704 ']' 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57704 00:11:12.779 22:52:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57704 00:11:12.779 killing process with pid 57704 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57704' 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57704 00:11:12.779 22:52:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57704 00:11:16.086 00:11:16.086 real 0m4.668s 00:11:16.086 user 0m5.039s 00:11:16.086 sys 0m0.583s 00:11:16.086 ************************************ 00:11:16.086 END TEST exit_on_failed_rpc_init 00:11:16.086 ************************************ 00:11:16.086 22:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.086 22:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:16.086 22:52:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:16.086 00:11:16.086 real 0m24.635s 00:11:16.086 user 0m23.567s 00:11:16.086 sys 0m2.245s 00:11:16.086 22:52:31 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.086 22:52:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.086 ************************************ 00:11:16.086 END TEST skip_rpc 00:11:16.086 ************************************ 00:11:16.086 22:52:31 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:16.086 22:52:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.086 22:52:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.086 22:52:31 -- common/autotest_common.sh@10 -- # set +x 00:11:16.086 ************************************ 00:11:16.086 START TEST rpc_client 00:11:16.086 ************************************ 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:16.086 * Looking for test storage... 00:11:16.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@345 
-- # : 1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.086 22:52:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.086 --rc genhtml_branch_coverage=1 00:11:16.086 --rc genhtml_function_coverage=1 00:11:16.086 --rc genhtml_legend=1 00:11:16.086 --rc geninfo_all_blocks=1 00:11:16.086 --rc geninfo_unexecuted_blocks=1 00:11:16.086 00:11:16.086 ' 00:11:16.086 22:52:31 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc 
genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:16.087 OK 00:11:16.087 22:52:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:16.087 00:11:16.087 real 0m0.304s 00:11:16.087 user 0m0.163s 00:11:16.087 sys 0m0.156s 00:11:16.087 22:52:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.087 22:52:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:16.087 ************************************ 00:11:16.087 END TEST rpc_client 00:11:16.087 ************************************ 00:11:16.087 22:52:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:16.087 22:52:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.087 22:52:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.087 22:52:31 -- common/autotest_common.sh@10 -- # set +x 00:11:16.087 ************************************ 00:11:16.087 START TEST json_config 
00:11:16.087 ************************************ 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.087 22:52:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.087 22:52:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.087 22:52:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.087 22:52:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.087 22:52:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.087 22:52:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:16.087 22:52:31 json_config -- scripts/common.sh@345 -- # : 1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.087 22:52:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.087 22:52:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@353 -- # local d=1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.087 22:52:31 json_config -- scripts/common.sh@355 -- # echo 1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.087 22:52:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@353 -- # local d=2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.087 22:52:31 json_config -- scripts/common.sh@355 -- # echo 2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.087 22:52:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.087 22:52:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.087 22:52:31 json_config -- scripts/common.sh@368 -- # return 0 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.087 --rc genhtml_branch_coverage=1 00:11:16.087 --rc genhtml_function_coverage=1 00:11:16.087 --rc genhtml_legend=1 00:11:16.087 --rc geninfo_all_blocks=1 00:11:16.087 --rc geninfo_unexecuted_blocks=1 00:11:16.087 00:11:16.087 ' 00:11:16.087 22:52:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fad45d12-5e8f-4f8f-b4ef-09b6c6113c8d 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=fad45d12-5e8f-4f8f-b4ef-09b6c6113c8d 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.087 22:52:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.087 22:52:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.087 22:52:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.087 22:52:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.087 22:52:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 22:52:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 22:52:31 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 22:52:31 json_config -- paths/export.sh@5 -- # export PATH 00:11:16.087 22:52:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@51 -- # : 0 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.087 22:52:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.347 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.347 22:52:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:16.347 WARNING: No tests are enabled so not running JSON configuration tests 00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:16.347 22:52:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:16.347 ************************************ 00:11:16.347 END TEST json_config 00:11:16.347 ************************************ 00:11:16.347 00:11:16.347 real 0m0.223s 00:11:16.347 user 0m0.128s 00:11:16.347 sys 0m0.100s 00:11:16.347 22:52:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.347 22:52:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:16.347 22:52:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:16.347 22:52:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.347 22:52:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.347 22:52:32 -- common/autotest_common.sh@10 -- # set +x 00:11:16.347 ************************************ 00:11:16.347 START TEST json_config_extra_key 00:11:16.347 ************************************ 00:11:16.347 22:52:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:16.347 22:52:32 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.347 22:52:32 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:11:16.347 22:52:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.347 22:52:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.347 22:52:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.608 --rc genhtml_branch_coverage=1 00:11:16.608 --rc genhtml_function_coverage=1 00:11:16.608 --rc genhtml_legend=1 00:11:16.608 --rc geninfo_all_blocks=1 00:11:16.608 --rc geninfo_unexecuted_blocks=1 00:11:16.608 00:11:16.608 ' 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.608 --rc genhtml_branch_coverage=1 00:11:16.608 --rc genhtml_function_coverage=1 00:11:16.608 --rc 
genhtml_legend=1 00:11:16.608 --rc geninfo_all_blocks=1 00:11:16.608 --rc geninfo_unexecuted_blocks=1 00:11:16.608 00:11:16.608 ' 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.608 --rc genhtml_branch_coverage=1 00:11:16.608 --rc genhtml_function_coverage=1 00:11:16.608 --rc genhtml_legend=1 00:11:16.608 --rc geninfo_all_blocks=1 00:11:16.608 --rc geninfo_unexecuted_blocks=1 00:11:16.608 00:11:16.608 ' 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.608 --rc genhtml_branch_coverage=1 00:11:16.608 --rc genhtml_function_coverage=1 00:11:16.608 --rc genhtml_legend=1 00:11:16.608 --rc geninfo_all_blocks=1 00:11:16.608 --rc geninfo_unexecuted_blocks=1 00:11:16.608 00:11:16.608 ' 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fad45d12-5e8f-4f8f-b4ef-09b6c6113c8d 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fad45d12-5e8f-4f8f-b4ef-09b6c6113c8d 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.608 22:52:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.608 22:52:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.608 22:52:32 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.608 22:52:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.608 22:52:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:16.608 22:52:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.608 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.608 22:52:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:16.608 INFO: launching applications... 
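The json_config_extra_key setup just traced keeps all per-app state in bash associative arrays keyed by app name (`app_pid`, `app_socket`, `app_params`, `configs_path`). A minimal sketch of that bookkeeping pattern; the `start_app` helper and the `sleep` stand-in for spdk_tgt are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Per-app bookkeeping via associative arrays keyed by app name,
# mirroring the app_pid/app_socket/app_params pattern in the trace.
declare -A app_pid=( [target]='' )
declare -A app_socket=( [target]='/var/tmp/spdk_tgt.sock' )
declare -A app_params=( [target]='-m 0x1 -s 1024' )

start_app() {
    local app=$1
    # Hypothetical launcher: a real harness would exec spdk_tgt with
    # ${app_params[$app]}; here a sleep stands in so the PID is real.
    sleep 60 &
    app_pid["$app"]=$!
}

start_app target
echo "target pid=${app_pid[target]} socket=${app_socket[target]}"
kill "${app_pid[target]}"
```

Keying every array by the same app name lets one shutdown routine handle any app: look up its PID, socket, and config path with a single `["$app"]` subscript, as `json_config_test_shutdown_app target` does later in the log.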
00:11:16.608 22:52:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57943 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:16.608 Waiting for target to run... 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57943 /var/tmp/spdk_tgt.sock 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57943 ']' 00:11:16.608 22:52:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:11:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.608 22:52:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:16.608 [2024-12-09 22:52:32.386365] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:11:16.608 [2024-12-09 22:52:32.386663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57943 ] 00:11:17.175 [2024-12-09 22:52:32.974458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.506 [2024-12-09 22:52:33.116279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.449 00:11:18.449 INFO: shutting down applications... 00:11:18.449 22:52:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.449 22:52:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:18.449 22:52:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:18.449 22:52:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
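The `waitforlisten` and shutdown sequence traced here boils down to two polling loops: wait for the target's UNIX-domain socket to appear, then send SIGINT and poll `kill -0` (with `sleep 0.5`, up to 30 tries) until the process is gone. A standalone sketch under those assumptions; function names and retry counts are illustrative, not the harness's exact interface:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket exists, then request shutdown with
# SIGINT and wait (kill -0) for the process to exit -- the shape of
# waitforlisten / json_config_test_shutdown_app in the trace.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process gone: done
        sleep 0.5
    done
    return 1   # still alive after ~15s of polling
}

sleep 1 & pid=$!
shutdown_app "$pid" && echo "shutdown done"
```

Note that `kill -0` sends no signal at all; it only checks whether the PID is still signalable, which is why the trace repeats `kill -0 57943` between half-second sleeps until the target exits.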
00:11:18.449 22:52:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:18.449 22:52:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:18.449 22:52:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:18.450 22:52:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57943 ]] 00:11:18.450 22:52:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57943 00:11:18.450 22:52:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:18.450 22:52:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:18.450 22:52:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:18.450 22:52:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:18.708 22:52:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:18.708 22:52:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:18.708 22:52:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:18.708 22:52:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:19.274 22:52:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:19.274 22:52:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:19.274 22:52:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:19.274 22:52:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:19.840 22:52:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:19.840 22:52:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:19.840 22:52:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:19.841 22:52:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:20.407 22:52:35 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:11:20.407 22:52:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:20.407 22:52:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:20.407 22:52:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:20.665 22:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:20.665 22:52:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:20.665 22:52:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:20.665 22:52:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:21.233 22:52:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:21.233 22:52:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:21.233 22:52:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:21.233 22:52:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57943 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:21.802 SPDK target shutdown done 00:11:21.802 22:52:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:21.802 22:52:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:21.802 Success 00:11:21.802 ************************************ 00:11:21.802 END TEST json_config_extra_key 00:11:21.802 ************************************ 00:11:21.802 00:11:21.802 real 0m5.499s 00:11:21.802 user 
0m4.705s 00:11:21.802 sys 0m0.871s 00:11:21.802 22:52:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.802 22:52:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:21.802 22:52:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:21.802 22:52:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.802 22:52:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.802 22:52:37 -- common/autotest_common.sh@10 -- # set +x 00:11:21.802 ************************************ 00:11:21.802 START TEST alias_rpc 00:11:21.802 ************************************ 00:11:21.802 22:52:37 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:22.062 * Looking for test storage... 00:11:22.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.062 22:52:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.062 --rc genhtml_branch_coverage=1 00:11:22.062 --rc genhtml_function_coverage=1 00:11:22.062 --rc genhtml_legend=1 00:11:22.062 --rc geninfo_all_blocks=1 00:11:22.062 --rc geninfo_unexecuted_blocks=1 00:11:22.062 
00:11:22.062 ' 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.062 --rc genhtml_branch_coverage=1 00:11:22.062 --rc genhtml_function_coverage=1 00:11:22.062 --rc genhtml_legend=1 00:11:22.062 --rc geninfo_all_blocks=1 00:11:22.062 --rc geninfo_unexecuted_blocks=1 00:11:22.062 00:11:22.062 ' 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.062 --rc genhtml_branch_coverage=1 00:11:22.062 --rc genhtml_function_coverage=1 00:11:22.062 --rc genhtml_legend=1 00:11:22.062 --rc geninfo_all_blocks=1 00:11:22.062 --rc geninfo_unexecuted_blocks=1 00:11:22.062 00:11:22.062 ' 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.062 --rc genhtml_branch_coverage=1 00:11:22.062 --rc genhtml_function_coverage=1 00:11:22.062 --rc genhtml_legend=1 00:11:22.062 --rc geninfo_all_blocks=1 00:11:22.062 --rc geninfo_unexecuted_blocks=1 00:11:22.062 00:11:22.062 ' 00:11:22.062 22:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:22.062 22:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58061 00:11:22.062 22:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:22.062 22:52:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58061 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58061 ']' 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.062 22:52:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.321 [2024-12-09 22:52:37.951793] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:11:22.321 [2024-12-09 22:52:37.952038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58061 ] 00:11:22.321 [2024-12-09 22:52:38.135097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.579 [2024-12-09 22:52:38.286590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:23.953 22:52:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:23.953 22:52:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58061 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58061 ']' 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58061 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58061 00:11:23.953 killing process with pid 58061 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.953 
22:52:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58061' 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 58061 00:11:23.953 22:52:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 58061 00:11:27.239 ************************************ 00:11:27.239 END TEST alias_rpc 00:11:27.239 ************************************ 00:11:27.239 00:11:27.239 real 0m4.894s 00:11:27.239 user 0m4.723s 00:11:27.239 sys 0m0.794s 00:11:27.239 22:52:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.239 22:52:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.239 22:52:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:27.239 22:52:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:27.239 22:52:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.239 22:52:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.239 22:52:42 -- common/autotest_common.sh@10 -- # set +x 00:11:27.239 ************************************ 00:11:27.239 START TEST spdkcli_tcp 00:11:27.239 ************************************ 00:11:27.239 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:27.239 * Looking for test storage... 
00:11:27.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:27.239 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.239 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.239 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.239 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.239 22:52:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.240 22:52:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.240 --rc genhtml_branch_coverage=1 00:11:27.240 --rc genhtml_function_coverage=1 00:11:27.240 --rc genhtml_legend=1 00:11:27.240 --rc geninfo_all_blocks=1 00:11:27.240 --rc geninfo_unexecuted_blocks=1 00:11:27.240 00:11:27.240 ' 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.240 --rc genhtml_branch_coverage=1 00:11:27.240 --rc genhtml_function_coverage=1 00:11:27.240 --rc genhtml_legend=1 00:11:27.240 --rc geninfo_all_blocks=1 00:11:27.240 --rc geninfo_unexecuted_blocks=1 00:11:27.240 00:11:27.240 ' 00:11:27.240 22:52:42 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.240 --rc genhtml_branch_coverage=1 00:11:27.240 --rc genhtml_function_coverage=1 00:11:27.240 --rc genhtml_legend=1 00:11:27.240 --rc geninfo_all_blocks=1 00:11:27.240 --rc geninfo_unexecuted_blocks=1 00:11:27.240 00:11:27.240 ' 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.240 --rc genhtml_branch_coverage=1 00:11:27.240 --rc genhtml_function_coverage=1 00:11:27.240 --rc genhtml_legend=1 00:11:27.240 --rc geninfo_all_blocks=1 00:11:27.240 --rc geninfo_unexecuted_blocks=1 00:11:27.240 00:11:27.240 ' 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58175 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:27.240 22:52:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58175 00:11:27.240 22:52:42 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58175 ']' 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.240 22:52:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 [2024-12-09 22:52:42.869497] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:11:27.240 [2024-12-09 22:52:42.869737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58175 ] 00:11:27.240 [2024-12-09 22:52:43.055203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:27.499 [2024-12-09 22:52:43.205100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.499 [2024-12-09 22:52:43.205142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.875 22:52:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.875 22:52:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:11:28.875 22:52:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58202 00:11:28.875 22:52:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:28.875 22:52:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:28.875 [ 00:11:28.875 "bdev_malloc_delete", 
00:11:28.875 "bdev_malloc_create", 00:11:28.875 "bdev_null_resize", 00:11:28.875 "bdev_null_delete", 00:11:28.875 "bdev_null_create", 00:11:28.875 "bdev_nvme_cuse_unregister", 00:11:28.875 "bdev_nvme_cuse_register", 00:11:28.875 "bdev_opal_new_user", 00:11:28.875 "bdev_opal_set_lock_state", 00:11:28.875 "bdev_opal_delete", 00:11:28.875 "bdev_opal_get_info", 00:11:28.875 "bdev_opal_create", 00:11:28.875 "bdev_nvme_opal_revert", 00:11:28.875 "bdev_nvme_opal_init", 00:11:28.875 "bdev_nvme_send_cmd", 00:11:28.875 "bdev_nvme_set_keys", 00:11:28.875 "bdev_nvme_get_path_iostat", 00:11:28.875 "bdev_nvme_get_mdns_discovery_info", 00:11:28.875 "bdev_nvme_stop_mdns_discovery", 00:11:28.875 "bdev_nvme_start_mdns_discovery", 00:11:28.875 "bdev_nvme_set_multipath_policy", 00:11:28.875 "bdev_nvme_set_preferred_path", 00:11:28.875 "bdev_nvme_get_io_paths", 00:11:28.875 "bdev_nvme_remove_error_injection", 00:11:28.875 "bdev_nvme_add_error_injection", 00:11:28.875 "bdev_nvme_get_discovery_info", 00:11:28.875 "bdev_nvme_stop_discovery", 00:11:28.875 "bdev_nvme_start_discovery", 00:11:28.875 "bdev_nvme_get_controller_health_info", 00:11:28.875 "bdev_nvme_disable_controller", 00:11:28.875 "bdev_nvme_enable_controller", 00:11:28.875 "bdev_nvme_reset_controller", 00:11:28.875 "bdev_nvme_get_transport_statistics", 00:11:28.875 "bdev_nvme_apply_firmware", 00:11:28.875 "bdev_nvme_detach_controller", 00:11:28.875 "bdev_nvme_get_controllers", 00:11:28.875 "bdev_nvme_attach_controller", 00:11:28.875 "bdev_nvme_set_hotplug", 00:11:28.875 "bdev_nvme_set_options", 00:11:28.875 "bdev_passthru_delete", 00:11:28.875 "bdev_passthru_create", 00:11:28.875 "bdev_lvol_set_parent_bdev", 00:11:28.875 "bdev_lvol_set_parent", 00:11:28.875 "bdev_lvol_check_shallow_copy", 00:11:28.875 "bdev_lvol_start_shallow_copy", 00:11:28.875 "bdev_lvol_grow_lvstore", 00:11:28.875 "bdev_lvol_get_lvols", 00:11:28.875 "bdev_lvol_get_lvstores", 00:11:28.875 "bdev_lvol_delete", 00:11:28.875 "bdev_lvol_set_read_only", 
00:11:28.875 "bdev_lvol_resize", 00:11:28.875 "bdev_lvol_decouple_parent", 00:11:28.875 "bdev_lvol_inflate", 00:11:28.875 "bdev_lvol_rename", 00:11:28.875 "bdev_lvol_clone_bdev", 00:11:28.875 "bdev_lvol_clone", 00:11:28.875 "bdev_lvol_snapshot", 00:11:28.875 "bdev_lvol_create", 00:11:28.875 "bdev_lvol_delete_lvstore", 00:11:28.875 "bdev_lvol_rename_lvstore", 00:11:28.875 "bdev_lvol_create_lvstore", 00:11:28.875 "bdev_raid_set_options", 00:11:28.875 "bdev_raid_remove_base_bdev", 00:11:28.875 "bdev_raid_add_base_bdev", 00:11:28.875 "bdev_raid_delete", 00:11:28.875 "bdev_raid_create", 00:11:28.875 "bdev_raid_get_bdevs", 00:11:28.875 "bdev_error_inject_error", 00:11:28.875 "bdev_error_delete", 00:11:28.875 "bdev_error_create", 00:11:28.875 "bdev_split_delete", 00:11:28.875 "bdev_split_create", 00:11:28.875 "bdev_delay_delete", 00:11:28.875 "bdev_delay_create", 00:11:28.875 "bdev_delay_update_latency", 00:11:28.875 "bdev_zone_block_delete", 00:11:28.875 "bdev_zone_block_create", 00:11:28.875 "blobfs_create", 00:11:28.875 "blobfs_detect", 00:11:28.875 "blobfs_set_cache_size", 00:11:28.875 "bdev_aio_delete", 00:11:28.875 "bdev_aio_rescan", 00:11:28.875 "bdev_aio_create", 00:11:28.875 "bdev_ftl_set_property", 00:11:28.875 "bdev_ftl_get_properties", 00:11:28.875 "bdev_ftl_get_stats", 00:11:28.875 "bdev_ftl_unmap", 00:11:28.875 "bdev_ftl_unload", 00:11:28.875 "bdev_ftl_delete", 00:11:28.875 "bdev_ftl_load", 00:11:28.875 "bdev_ftl_create", 00:11:28.875 "bdev_virtio_attach_controller", 00:11:28.875 "bdev_virtio_scsi_get_devices", 00:11:28.875 "bdev_virtio_detach_controller", 00:11:28.875 "bdev_virtio_blk_set_hotplug", 00:11:28.875 "bdev_iscsi_delete", 00:11:28.875 "bdev_iscsi_create", 00:11:28.875 "bdev_iscsi_set_options", 00:11:28.875 "accel_error_inject_error", 00:11:28.875 "ioat_scan_accel_module", 00:11:28.875 "dsa_scan_accel_module", 00:11:28.875 "iaa_scan_accel_module", 00:11:28.875 "keyring_file_remove_key", 00:11:28.875 "keyring_file_add_key", 00:11:28.875 
"keyring_linux_set_options", 00:11:28.875 "fsdev_aio_delete", 00:11:28.875 "fsdev_aio_create", 00:11:28.875 "iscsi_get_histogram", 00:11:28.875 "iscsi_enable_histogram", 00:11:28.875 "iscsi_set_options", 00:11:28.875 "iscsi_get_auth_groups", 00:11:28.875 "iscsi_auth_group_remove_secret", 00:11:28.875 "iscsi_auth_group_add_secret", 00:11:28.875 "iscsi_delete_auth_group", 00:11:28.875 "iscsi_create_auth_group", 00:11:28.875 "iscsi_set_discovery_auth", 00:11:28.875 "iscsi_get_options", 00:11:28.875 "iscsi_target_node_request_logout", 00:11:28.875 "iscsi_target_node_set_redirect", 00:11:28.875 "iscsi_target_node_set_auth", 00:11:28.875 "iscsi_target_node_add_lun", 00:11:28.875 "iscsi_get_stats", 00:11:28.875 "iscsi_get_connections", 00:11:28.875 "iscsi_portal_group_set_auth", 00:11:28.875 "iscsi_start_portal_group", 00:11:28.875 "iscsi_delete_portal_group", 00:11:28.875 "iscsi_create_portal_group", 00:11:28.875 "iscsi_get_portal_groups", 00:11:28.875 "iscsi_delete_target_node", 00:11:28.875 "iscsi_target_node_remove_pg_ig_maps", 00:11:28.875 "iscsi_target_node_add_pg_ig_maps", 00:11:28.875 "iscsi_create_target_node", 00:11:28.875 "iscsi_get_target_nodes", 00:11:28.875 "iscsi_delete_initiator_group", 00:11:28.875 "iscsi_initiator_group_remove_initiators", 00:11:28.875 "iscsi_initiator_group_add_initiators", 00:11:28.875 "iscsi_create_initiator_group", 00:11:28.875 "iscsi_get_initiator_groups", 00:11:28.875 "nvmf_set_crdt", 00:11:28.875 "nvmf_set_config", 00:11:28.875 "nvmf_set_max_subsystems", 00:11:28.875 "nvmf_stop_mdns_prr", 00:11:28.875 "nvmf_publish_mdns_prr", 00:11:28.875 "nvmf_subsystem_get_listeners", 00:11:28.875 "nvmf_subsystem_get_qpairs", 00:11:28.875 "nvmf_subsystem_get_controllers", 00:11:28.875 "nvmf_get_stats", 00:11:28.875 "nvmf_get_transports", 00:11:28.875 "nvmf_create_transport", 00:11:28.875 "nvmf_get_targets", 00:11:28.875 "nvmf_delete_target", 00:11:28.875 "nvmf_create_target", 00:11:28.875 "nvmf_subsystem_allow_any_host", 00:11:28.875 
"nvmf_subsystem_set_keys", 00:11:28.875 "nvmf_subsystem_remove_host", 00:11:28.875 "nvmf_subsystem_add_host", 00:11:28.875 "nvmf_ns_remove_host", 00:11:28.875 "nvmf_ns_add_host", 00:11:28.875 "nvmf_subsystem_remove_ns", 00:11:28.875 "nvmf_subsystem_set_ns_ana_group", 00:11:28.875 "nvmf_subsystem_add_ns", 00:11:28.875 "nvmf_subsystem_listener_set_ana_state", 00:11:28.875 "nvmf_discovery_get_referrals", 00:11:28.875 "nvmf_discovery_remove_referral", 00:11:28.875 "nvmf_discovery_add_referral", 00:11:28.875 "nvmf_subsystem_remove_listener", 00:11:28.875 "nvmf_subsystem_add_listener", 00:11:28.875 "nvmf_delete_subsystem", 00:11:28.875 "nvmf_create_subsystem", 00:11:28.875 "nvmf_get_subsystems", 00:11:28.875 "env_dpdk_get_mem_stats", 00:11:28.875 "nbd_get_disks", 00:11:28.875 "nbd_stop_disk", 00:11:28.875 "nbd_start_disk", 00:11:28.875 "ublk_recover_disk", 00:11:28.875 "ublk_get_disks", 00:11:28.875 "ublk_stop_disk", 00:11:28.875 "ublk_start_disk", 00:11:28.875 "ublk_destroy_target", 00:11:28.875 "ublk_create_target", 00:11:28.875 "virtio_blk_create_transport", 00:11:28.875 "virtio_blk_get_transports", 00:11:28.875 "vhost_controller_set_coalescing", 00:11:28.875 "vhost_get_controllers", 00:11:28.875 "vhost_delete_controller", 00:11:28.875 "vhost_create_blk_controller", 00:11:28.875 "vhost_scsi_controller_remove_target", 00:11:28.875 "vhost_scsi_controller_add_target", 00:11:28.875 "vhost_start_scsi_controller", 00:11:28.875 "vhost_create_scsi_controller", 00:11:28.875 "thread_set_cpumask", 00:11:28.876 "scheduler_set_options", 00:11:28.876 "framework_get_governor", 00:11:28.876 "framework_get_scheduler", 00:11:28.876 "framework_set_scheduler", 00:11:28.876 "framework_get_reactors", 00:11:28.876 "thread_get_io_channels", 00:11:28.876 "thread_get_pollers", 00:11:28.876 "thread_get_stats", 00:11:28.876 "framework_monitor_context_switch", 00:11:28.876 "spdk_kill_instance", 00:11:28.876 "log_enable_timestamps", 00:11:28.876 "log_get_flags", 00:11:28.876 "log_clear_flag", 
00:11:28.876 "log_set_flag", 00:11:28.876 "log_get_level", 00:11:28.876 "log_set_level", 00:11:28.876 "log_get_print_level", 00:11:28.876 "log_set_print_level", 00:11:28.876 "framework_enable_cpumask_locks", 00:11:28.876 "framework_disable_cpumask_locks", 00:11:28.876 "framework_wait_init", 00:11:28.876 "framework_start_init", 00:11:28.876 "scsi_get_devices", 00:11:28.876 "bdev_get_histogram", 00:11:28.876 "bdev_enable_histogram", 00:11:28.876 "bdev_set_qos_limit", 00:11:28.876 "bdev_set_qd_sampling_period", 00:11:28.876 "bdev_get_bdevs", 00:11:28.876 "bdev_reset_iostat", 00:11:28.876 "bdev_get_iostat", 00:11:28.876 "bdev_examine", 00:11:28.876 "bdev_wait_for_examine", 00:11:28.876 "bdev_set_options", 00:11:28.876 "accel_get_stats", 00:11:28.876 "accel_set_options", 00:11:28.876 "accel_set_driver", 00:11:28.876 "accel_crypto_key_destroy", 00:11:28.876 "accel_crypto_keys_get", 00:11:28.876 "accel_crypto_key_create", 00:11:28.876 "accel_assign_opc", 00:11:28.876 "accel_get_module_info", 00:11:28.876 "accel_get_opc_assignments", 00:11:28.876 "vmd_rescan", 00:11:28.876 "vmd_remove_device", 00:11:28.876 "vmd_enable", 00:11:28.876 "sock_get_default_impl", 00:11:28.876 "sock_set_default_impl", 00:11:28.876 "sock_impl_set_options", 00:11:28.876 "sock_impl_get_options", 00:11:28.876 "iobuf_get_stats", 00:11:28.876 "iobuf_set_options", 00:11:28.876 "keyring_get_keys", 00:11:28.876 "framework_get_pci_devices", 00:11:28.876 "framework_get_config", 00:11:28.876 "framework_get_subsystems", 00:11:28.876 "fsdev_set_opts", 00:11:28.876 "fsdev_get_opts", 00:11:28.876 "trace_get_info", 00:11:28.876 "trace_get_tpoint_group_mask", 00:11:28.876 "trace_disable_tpoint_group", 00:11:28.876 "trace_enable_tpoint_group", 00:11:28.876 "trace_clear_tpoint_mask", 00:11:28.876 "trace_set_tpoint_mask", 00:11:28.876 "notify_get_notifications", 00:11:28.876 "notify_get_types", 00:11:28.876 "spdk_get_version", 00:11:28.876 "rpc_get_methods" 00:11:28.876 ] 00:11:28.876 22:52:44 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.876 22:52:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:28.876 22:52:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58175 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58175 ']' 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58175 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58175 00:11:28.876 killing process with pid 58175 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58175' 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58175 00:11:28.876 22:52:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58175 00:11:32.166 ************************************ 00:11:32.166 END TEST spdkcli_tcp 00:11:32.166 ************************************ 00:11:32.166 00:11:32.166 real 0m4.944s 00:11:32.166 user 0m8.736s 00:11:32.166 sys 0m0.831s 00:11:32.166 22:52:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.166 22:52:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.167 22:52:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:32.167 22:52:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.167 22:52:47 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.167 22:52:47 -- common/autotest_common.sh@10 -- # set +x 00:11:32.167 ************************************ 00:11:32.167 START TEST dpdk_mem_utility 00:11:32.167 ************************************ 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:32.167 * Looking for test storage... 00:11:32.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:32.167 
22:52:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:32.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.167 22:52:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.167 --rc genhtml_branch_coverage=1 00:11:32.167 --rc genhtml_function_coverage=1 00:11:32.167 --rc genhtml_legend=1 00:11:32.167 --rc geninfo_all_blocks=1 00:11:32.167 --rc geninfo_unexecuted_blocks=1 00:11:32.167 00:11:32.167 ' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.167 --rc genhtml_branch_coverage=1 00:11:32.167 --rc genhtml_function_coverage=1 00:11:32.167 --rc genhtml_legend=1 00:11:32.167 --rc geninfo_all_blocks=1 00:11:32.167 --rc geninfo_unexecuted_blocks=1 00:11:32.167 00:11:32.167 ' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.167 --rc genhtml_branch_coverage=1 00:11:32.167 --rc genhtml_function_coverage=1 00:11:32.167 --rc genhtml_legend=1 00:11:32.167 --rc geninfo_all_blocks=1 00:11:32.167 --rc geninfo_unexecuted_blocks=1 00:11:32.167 00:11:32.167 ' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.167 --rc genhtml_branch_coverage=1 00:11:32.167 --rc genhtml_function_coverage=1 00:11:32.167 --rc genhtml_legend=1 
00:11:32.167 --rc geninfo_all_blocks=1 00:11:32.167 --rc geninfo_unexecuted_blocks=1 00:11:32.167 00:11:32.167 ' 00:11:32.167 22:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:32.167 22:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58307 00:11:32.167 22:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58307 00:11:32.167 22:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58307 ']' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.167 22:52:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:32.167 [2024-12-09 22:52:47.866026] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:11:32.167 [2024-12-09 22:52:47.866272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58307 ] 00:11:32.429 [2024-12-09 22:52:48.048886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.429 [2024-12-09 22:52:48.201210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.808 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.808 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:33.808 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:33.808 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:33.808 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.808 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:33.808 { 00:11:33.808 "filename": "/tmp/spdk_mem_dump.txt" 00:11:33.808 } 00:11:33.808 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.808 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:33.808 DPDK memory size 824.000000 MiB in 1 heap(s) 00:11:33.808 1 heaps totaling size 824.000000 MiB 00:11:33.808 size: 824.000000 MiB heap id: 0 00:11:33.808 end heaps---------- 00:11:33.808 9 mempools totaling size 603.782043 MiB 00:11:33.808 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:33.808 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:33.808 size: 100.555481 MiB name: bdev_io_58307 00:11:33.808 size: 50.003479 MiB name: msgpool_58307 00:11:33.808 size: 36.509338 MiB name: fsdev_io_58307 00:11:33.808 size: 
21.763794 MiB name: PDU_Pool 00:11:33.808 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:33.808 size: 4.133484 MiB name: evtpool_58307 00:11:33.808 size: 0.026123 MiB name: Session_Pool 00:11:33.808 end mempools------- 00:11:33.808 6 memzones totaling size 4.142822 MiB 00:11:33.808 size: 1.000366 MiB name: RG_ring_0_58307 00:11:33.808 size: 1.000366 MiB name: RG_ring_1_58307 00:11:33.808 size: 1.000366 MiB name: RG_ring_4_58307 00:11:33.808 size: 1.000366 MiB name: RG_ring_5_58307 00:11:33.808 size: 0.125366 MiB name: RG_ring_2_58307 00:11:33.808 size: 0.015991 MiB name: RG_ring_3_58307 00:11:33.808 end memzones------- 00:11:33.808 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:33.808 heap id: 0 total size: 824.000000 MiB number of busy elements: 317 number of free elements: 18 00:11:33.808 list of free elements. size: 16.780884 MiB 00:11:33.808 element at address: 0x200006400000 with size: 1.995972 MiB 00:11:33.808 element at address: 0x20000a600000 with size: 1.995972 MiB 00:11:33.808 element at address: 0x200003e00000 with size: 1.991028 MiB 00:11:33.808 element at address: 0x200019500040 with size: 0.999939 MiB 00:11:33.808 element at address: 0x200019900040 with size: 0.999939 MiB 00:11:33.808 element at address: 0x200019a00000 with size: 0.999084 MiB 00:11:33.808 element at address: 0x200032600000 with size: 0.994324 MiB 00:11:33.808 element at address: 0x200000400000 with size: 0.992004 MiB 00:11:33.808 element at address: 0x200019200000 with size: 0.959656 MiB 00:11:33.808 element at address: 0x200019d00040 with size: 0.936401 MiB 00:11:33.808 element at address: 0x200000200000 with size: 0.716980 MiB 00:11:33.808 element at address: 0x20001b400000 with size: 0.562195 MiB 00:11:33.808 element at address: 0x200000c00000 with size: 0.489197 MiB 00:11:33.808 element at address: 0x200019600000 with size: 0.487976 MiB 00:11:33.808 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:11:33.808 element at address: 0x200012c00000 with size: 0.433472 MiB 00:11:33.808 element at address: 0x200028800000 with size: 0.390442 MiB 00:11:33.808 element at address: 0x200000800000 with size: 0.350891 MiB 00:11:33.808 list of standard malloc elements. size: 199.288208 MiB 00:11:33.808 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:11:33.808 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:11:33.808 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:33.808 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:11:33.808 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:11:33.808 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:33.808 element at address: 0x200019deff40 with size: 0.062683 MiB 00:11:33.808 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:33.808 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:11:33.808 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:11:33.808 element at address: 0x200012bff040 with size: 0.000305 MiB 00:11:33.808 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:11:33.808 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:11:33.808 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:11:33.808 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:11:33.809 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:11:33.809 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200000cff000 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff280 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff380 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff480 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff580 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff680 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff780 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff880 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bff980 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:11:33.809 element at 
address: 0x20001967d1c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200019affc40 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b490bc0 with size: 0.000244 MiB 
00:11:33.809 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4927c0 with 
size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:11:33.809 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:11:33.810 element at address: 
0x20001b4943c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:11:33.810 element at address: 0x200028863f40 with size: 0.000244 MiB 00:11:33.810 element at address: 0x200028864040 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886af80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b080 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b180 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b280 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b380 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b480 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b580 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b680 with size: 0.000244 MiB 00:11:33.810 
element at address: 0x20002886b780 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b880 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886b980 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886be80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c080 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c180 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c280 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c380 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c480 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c580 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c680 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c780 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c880 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886c980 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d080 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d180 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d280 with size: 0.000244 
MiB 00:11:33.810 element at address: 0x20002886d380 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d480 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d580 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d680 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d780 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d880 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886d980 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886da80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886db80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886de80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886df80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e080 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e180 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e280 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e380 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e480 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e580 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e680 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e780 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e880 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886e980 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ee80 
with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f080 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f180 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f280 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f380 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f480 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f580 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f680 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f780 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f880 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886f980 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:11:33.810 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:11:33.810 list of memzone associated elements. 
size: 607.930908 MiB
00:11:33.810 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:11:33.810 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:11:33.810 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:11:33.810 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:11:33.810 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:11:33.810 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58307_0
00:11:33.810 element at address: 0x200000dff340 with size: 48.003113 MiB
00:11:33.810 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58307_0
00:11:33.810 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:11:33.810 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58307_0
00:11:33.810 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:11:33.810 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:11:33.810 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:11:33.810 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:11:33.810 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:11:33.810 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58307_0
00:11:33.810 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:11:33.810 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58307
00:11:33.810 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:11:33.810 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58307
00:11:33.810 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:11:33.810 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:11:33.810 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:11:33.810 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:11:33.810 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:11:33.810 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:11:33.810 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:11:33.810 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:11:33.810 element at address: 0x200000cff100 with size: 1.000549 MiB
00:11:33.810 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58307
00:11:33.810 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:11:33.810 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58307
00:11:33.810 element at address: 0x200019affd40 with size: 1.000549 MiB
00:11:33.810 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58307
00:11:33.811 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:11:33.811 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58307
00:11:33.811 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:11:33.811 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58307
00:11:33.811 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:11:33.811 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58307
00:11:33.811 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:11:33.811 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:11:33.811 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:11:33.811 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:11:33.811 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:11:33.811 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:11:33.811 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:11:33.811 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58307
00:11:33.811 element at address: 0x20000085df80 with size: 0.125549 MiB
00:11:33.811 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58307
00:11:33.811 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:11:33.811
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:11:33.811 element at address: 0x200028864140 with size: 0.023804 MiB
00:11:33.811 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:11:33.811 element at address: 0x200000859d40 with size: 0.016174 MiB
00:11:33.811 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58307
00:11:33.811 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:11:33.811 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:11:33.811 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:11:33.811 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58307
00:11:33.811 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:11:33.811 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58307
00:11:33.811 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:11:33.811 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58307
00:11:33.811 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:11:33.811 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:11:33.811 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:11:33.811 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58307
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58307 ']'
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58307
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58307
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:33.811 killing process with pid 58307
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58307'
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58307
00:11:33.811 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58307
00:11:37.106 ************************************
00:11:37.106 END TEST dpdk_mem_utility
00:11:37.106 ************************************
00:11:37.106
00:11:37.106 real 0m4.884s
00:11:37.106 user 0m4.634s
00:11:37.106 sys 0m0.755s
00:11:37.106 22:52:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:37.106 22:52:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:11:37.106 22:52:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:11:37.106 22:52:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:37.106 22:52:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.106 22:52:52 -- common/autotest_common.sh@10 -- # set +x
00:11:37.106 ************************************
00:11:37.106 START TEST event
00:11:37.106 ************************************
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:11:37.106 * Looking for test storage...
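The killprocess trace above (autotest_common.sh @954-@978) reduces to: validate the pid, probe that the process exists, refuse to signal a sudo wrapper, then kill and reap. A minimal sketch of that flow, reconstructed from the trace rather than copied from the actual SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow seen in the trace. The structure mirrors
# the logged steps (@954-@978); details are a reconstruction, not the
# verbatim autotest_common.sh source.
killprocess() {
    local pid=$1
    # @954: an empty pid would make the later kill fail confusingly.
    if [ -z "$pid" ]; then
        return 1
    fi
    # @958: kill -0 only probes that the process exists and is signalable.
    kill -0 "$pid" 2>/dev/null || return 0
    # @960: look up the command name of the target process.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # @964: never signal a sudo wrapper directly.
    if [ "$process_name" = sudo ]; then
        return 1
    fi
    # @972-@978: announce, terminate, and reap the child.
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reports the signal status (e.g. 143 for SIGTERM); ignore it.
    wait "$pid" 2>/dev/null || true
}
```

In the trace the target is the SPDK app (pid 58307, comm reactor_0); the sudo comparison presumably guards against signalling a privilege wrapper instead of the process it spawned.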
00:11:37.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1711 -- # lcov --version
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:37.106 22:52:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:37.106 22:52:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:37.106 22:52:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:37.106 22:52:52 event -- scripts/common.sh@336 -- # IFS=.-:
00:11:37.106 22:52:52 event -- scripts/common.sh@336 -- # read -ra ver1
00:11:37.106 22:52:52 event -- scripts/common.sh@337 -- # IFS=.-:
00:11:37.106 22:52:52 event -- scripts/common.sh@337 -- # read -ra ver2
00:11:37.106 22:52:52 event -- scripts/common.sh@338 -- # local 'op=<'
00:11:37.106 22:52:52 event -- scripts/common.sh@340 -- # ver1_l=2
00:11:37.106 22:52:52 event -- scripts/common.sh@341 -- # ver2_l=1
00:11:37.106 22:52:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:37.106 22:52:52 event -- scripts/common.sh@344 -- # case "$op" in
00:11:37.106 22:52:52 event -- scripts/common.sh@345 -- # : 1
00:11:37.106 22:52:52 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:37.106 22:52:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:37.106 22:52:52 event -- scripts/common.sh@365 -- # decimal 1
00:11:37.106 22:52:52 event -- scripts/common.sh@353 -- # local d=1
00:11:37.106 22:52:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:37.106 22:52:52 event -- scripts/common.sh@355 -- # echo 1
00:11:37.106 22:52:52 event -- scripts/common.sh@365 -- # ver1[v]=1
00:11:37.106 22:52:52 event -- scripts/common.sh@366 -- # decimal 2
00:11:37.106 22:52:52 event -- scripts/common.sh@353 -- # local d=2
00:11:37.106 22:52:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:37.106 22:52:52 event -- scripts/common.sh@355 -- # echo 2
00:11:37.106 22:52:52 event -- scripts/common.sh@366 -- # ver2[v]=2
00:11:37.106 22:52:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:37.106 22:52:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:37.106 22:52:52 event -- scripts/common.sh@368 -- # return 0
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.106 --rc genhtml_branch_coverage=1
00:11:37.106 --rc genhtml_function_coverage=1
00:11:37.106 --rc genhtml_legend=1
00:11:37.106 --rc geninfo_all_blocks=1
00:11:37.106 --rc geninfo_unexecuted_blocks=1
00:11:37.106
00:11:37.106 '
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.106 --rc genhtml_branch_coverage=1
00:11:37.106 --rc genhtml_function_coverage=1
00:11:37.106 --rc genhtml_legend=1
00:11:37.106 --rc geninfo_all_blocks=1
00:11:37.106 --rc geninfo_unexecuted_blocks=1
00:11:37.106
00:11:37.106 '
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.106 --rc genhtml_branch_coverage=1
00:11:37.106 --rc genhtml_function_coverage=1
00:11:37.106 --rc genhtml_legend=1
00:11:37.106 --rc geninfo_all_blocks=1
00:11:37.106 --rc geninfo_unexecuted_blocks=1
00:11:37.106
00:11:37.106 '
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:37.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:37.106 --rc genhtml_branch_coverage=1
00:11:37.106 --rc genhtml_function_coverage=1
00:11:37.106 --rc genhtml_legend=1
00:11:37.106 --rc geninfo_all_blocks=1
00:11:37.106 --rc geninfo_unexecuted_blocks=1
00:11:37.106
00:11:37.106 '
00:11:37.106 22:52:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:11:37.106 22:52:52 event -- bdev/nbd_common.sh@6 -- # set -e
00:11:37.106 22:52:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:11:37.106 22:52:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.106 22:52:52 event -- common/autotest_common.sh@10 -- # set +x
00:11:37.106 ************************************
00:11:37.106 START TEST event_perf
00:11:37.106 ************************************
00:11:37.106 22:52:52 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:11:37.106 Running I/O for 1 seconds...[2024-12-09 22:52:52.798247] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:37.106 [2024-12-09 22:52:52.798481] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58426 ]
00:11:37.366 [2024-12-09 22:52:52.982741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:37.366 [2024-12-09 22:52:53.139339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:37.366 Running I/O for 1 seconds...[2024-12-09 22:52:53.139561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:37.366 [2024-12-09 22:52:53.139779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:37.366 [2024-12-09 22:52:53.140111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:38.748
00:11:38.748 lcore 0: 105780
00:11:38.748 lcore 1: 105783
00:11:38.748 lcore 2: 105781
00:11:38.748 lcore 3: 105781
00:11:38.748 done.
00:11:38.748 00:11:38.748 real 0m1.682s 00:11:38.748 user 0m4.414s 00:11:38.748 sys 0m0.140s 00:11:38.748 22:52:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.748 22:52:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:38.748 ************************************ 00:11:38.748 END TEST event_perf 00:11:38.748 ************************************ 00:11:38.748 22:52:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:38.748 22:52:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.748 22:52:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.748 22:52:54 event -- common/autotest_common.sh@10 -- # set +x 00:11:38.748 ************************************ 00:11:38.748 START TEST event_reactor 00:11:38.748 ************************************ 00:11:38.748 22:52:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:38.748 [2024-12-09 22:52:54.542347] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:11:38.748 [2024-12-09 22:52:54.542569] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58471 ] 00:11:39.007 [2024-12-09 22:52:54.718115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.266 [2024-12-09 22:52:54.867563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.644 test_start 00:11:40.644 oneshot 00:11:40.644 tick 100 00:11:40.644 tick 100 00:11:40.644 tick 250 00:11:40.644 tick 100 00:11:40.644 tick 100 00:11:40.644 tick 100 00:11:40.644 tick 250 00:11:40.644 tick 500 00:11:40.644 tick 100 00:11:40.644 tick 100 00:11:40.644 tick 250 00:11:40.644 tick 100 00:11:40.644 tick 100 00:11:40.644 test_end 00:11:40.644 00:11:40.644 real 0m1.653s 00:11:40.644 user 0m1.440s 00:11:40.644 sys 0m0.103s 00:11:40.644 22:52:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.644 22:52:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:40.644 ************************************ 00:11:40.644 END TEST event_reactor 00:11:40.644 ************************************ 00:11:40.644 22:52:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:40.644 22:52:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:40.644 22:52:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.644 22:52:56 event -- common/autotest_common.sh@10 -- # set +x 00:11:40.644 ************************************ 00:11:40.644 START TEST event_reactor_perf 00:11:40.644 ************************************ 00:11:40.644 22:52:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:40.644 [2024-12-09 
22:52:56.262462] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:11:40.644 [2024-12-09 22:52:56.262688] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58502 ] 00:11:40.644 [2024-12-09 22:52:56.439821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.902 [2024-12-09 22:52:56.594304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.278 test_start 00:11:42.278 test_end 00:11:42.278 Performance: 330854 events per second 00:11:42.278 00:11:42.278 real 0m1.651s 00:11:42.278 user 0m1.430s 00:11:42.278 sys 0m0.110s 00:11:42.278 22:52:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.278 22:52:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 ************************************ 00:11:42.278 END TEST event_reactor_perf 00:11:42.278 ************************************ 00:11:42.278 22:52:57 event -- event/event.sh@49 -- # uname -s 00:11:42.278 22:52:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:42.278 22:52:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:42.278 22:52:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.278 22:52:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.278 22:52:57 event -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 ************************************ 00:11:42.278 START TEST event_scheduler 00:11:42.278 ************************************ 00:11:42.278 22:52:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:42.278 * Looking for test storage... 
00:11:42.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:42.278 22:52:58 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.278 22:52:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.278 22:52:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.554 22:52:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.554 22:52:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:42.554 22:52:58 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.554 22:52:58 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.554 --rc genhtml_branch_coverage=1 00:11:42.554 --rc genhtml_function_coverage=1 00:11:42.554 --rc genhtml_legend=1 00:11:42.554 --rc geninfo_all_blocks=1 00:11:42.554 --rc geninfo_unexecuted_blocks=1 00:11:42.554 00:11:42.554 ' 00:11:42.554 22:52:58 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.554 --rc genhtml_branch_coverage=1 00:11:42.554 --rc genhtml_function_coverage=1 00:11:42.554 --rc 
genhtml_legend=1 00:11:42.554 --rc geninfo_all_blocks=1 00:11:42.554 --rc geninfo_unexecuted_blocks=1 00:11:42.554 00:11:42.554 ' 00:11:42.554 22:52:58 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.554 --rc genhtml_branch_coverage=1 00:11:42.554 --rc genhtml_function_coverage=1 00:11:42.554 --rc genhtml_legend=1 00:11:42.554 --rc geninfo_all_blocks=1 00:11:42.554 --rc geninfo_unexecuted_blocks=1 00:11:42.554 00:11:42.554 ' 00:11:42.554 22:52:58 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.554 --rc genhtml_branch_coverage=1 00:11:42.554 --rc genhtml_function_coverage=1 00:11:42.554 --rc genhtml_legend=1 00:11:42.554 --rc geninfo_all_blocks=1 00:11:42.555 --rc geninfo_unexecuted_blocks=1 00:11:42.555 00:11:42.555 ' 00:11:42.555 22:52:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:42.555 22:52:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58578 00:11:42.555 22:52:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:42.555 22:52:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:42.555 22:52:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58578 00:11:42.555 22:52:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58578 ']' 00:11:42.555 22:52:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.555 22:52:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.555 22:52:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:11:42.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.555 22:52:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.555 22:52:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:42.555 [2024-12-09 22:52:58.264663] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:11:42.555 [2024-12-09 22:52:58.265282] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58578 ] 00:11:42.841 [2024-12-09 22:52:58.443758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.841 [2024-12-09 22:52:58.571679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.841 [2024-12-09 22:52:58.571565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.841 [2024-12-09 22:52:58.571572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.841 [2024-12-09 22:52:58.571707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:43.409 22:52:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:43.409 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:43.409 POWER: Cannot set governor of lcore 0 to userspace 00:11:43.409 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:43.409 POWER: Cannot set governor of lcore 0 to performance 00:11:43.409 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:43.409 POWER: Cannot set governor of lcore 0 to userspace 00:11:43.409 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:43.409 POWER: Cannot set governor of lcore 0 to userspace 00:11:43.409 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:43.409 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:43.409 POWER: Unable to set Power Management Environment for lcore 0 00:11:43.409 [2024-12-09 22:52:59.146707] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:11:43.409 [2024-12-09 22:52:59.146793] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:11:43.409 [2024-12-09 22:52:59.146868] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:43.409 [2024-12-09 22:52:59.146957] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:43.409 [2024-12-09 22:52:59.147029] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:43.409 [2024-12-09 22:52:59.147109] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.409 22:52:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.409 22:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:43.667 [2024-12-09 22:52:59.508125] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:11:43.667 22:52:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.667 22:52:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:43.667 22:52:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.667 22:52:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.667 22:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:43.667 ************************************ 00:11:43.667 START TEST scheduler_create_thread 00:11:43.667 ************************************ 00:11:43.667 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:43.667 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:43.667 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.667 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 2 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 3 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 4 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 5 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 6 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:11:43.926 7 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 8 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 9 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 10 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.926 22:52:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:45.308 22:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.308 22:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:45.308 22:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:45.308 22:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.308 22:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 22:53:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.244 22:53:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:46.244 22:53:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.244 22:53:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:46.813 22:53:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.813 22:53:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:46.813 22:53:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:46.813 22:53:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.814 22:53:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:47.752 22:53:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.752 ************************************ 00:11:47.752 END TEST scheduler_create_thread 00:11:47.752 ************************************ 00:11:47.752 00:11:47.752 real 0m3.884s 00:11:47.753 user 0m0.027s 00:11:47.753 sys 0m0.009s 00:11:47.753 22:53:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.753 22:53:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:47.753 22:53:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:47.753 22:53:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58578 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58578 ']' 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58578 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58578 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58578' 00:11:47.753 killing process with pid 58578 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58578 00:11:47.753 22:53:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58578 00:11:48.012 [2024-12-09 22:53:03.782840] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:49.390 00:11:49.390 real 0m7.087s 00:11:49.390 user 0m14.684s 00:11:49.390 sys 0m0.543s 00:11:49.390 22:53:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.390 ************************************ 00:11:49.390 END TEST event_scheduler 00:11:49.390 ************************************ 00:11:49.390 22:53:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:49.390 22:53:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:49.390 22:53:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:49.390 22:53:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:49.390 22:53:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.390 22:53:05 event -- common/autotest_common.sh@10 -- # set +x 00:11:49.390 ************************************ 00:11:49.390 START TEST app_repeat 00:11:49.390 ************************************ 00:11:49.390 22:53:05 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:49.390 22:53:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:49.390 22:53:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:49.390 22:53:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58708 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:49.391 22:53:05 event.app_repeat -- 
event/event.sh@21 -- # echo 'Process app_repeat pid: 58708' 00:11:49.391 Process app_repeat pid: 58708 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:49.391 spdk_app_start Round 0 00:11:49.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58708 /var/tmp/spdk-nbd.sock 00:11:49.391 22:53:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58708 ']' 00:11:49.391 22:53:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:49.391 22:53:05 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:49.391 22:53:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.391 22:53:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:49.391 22:53:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.391 22:53:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:49.391 [2024-12-09 22:53:05.164275] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:11:49.391 [2024-12-09 22:53:05.164428] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58708 ] 00:11:49.650 [2024-12-09 22:53:05.326266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:49.650 [2024-12-09 22:53:05.480559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.650 [2024-12-09 22:53:05.480594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.588 22:53:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.588 22:53:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:50.588 22:53:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:50.588 Malloc0 00:11:50.588 22:53:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:51.157 Malloc1 00:11:51.157 22:53:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:51.157 22:53:06 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.157 22:53:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:51.157 /dev/nbd0 00:11:51.157 22:53:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:51.157 22:53:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:51.157 22:53:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:51.157 22:53:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:51.157 22:53:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.157 22:53:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.157 22:53:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:51.421 1+0 records in 00:11:51.421 1+0 
records out 00:11:51.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573414 s, 7.1 MB/s 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.421 22:53:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:51.421 22:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.421 22:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.421 22:53:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:51.421 /dev/nbd1 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:51.699 1+0 records in 00:11:51.699 1+0 records out 00:11:51.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418926 s, 9.8 MB/s 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.699 22:53:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.699 22:53:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:52.051 { 00:11:52.051 "nbd_device": "/dev/nbd0", 00:11:52.051 "bdev_name": "Malloc0" 00:11:52.051 }, 00:11:52.051 { 00:11:52.051 "nbd_device": "/dev/nbd1", 00:11:52.051 "bdev_name": "Malloc1" 00:11:52.051 } 00:11:52.051 ]' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:52.051 { 00:11:52.051 "nbd_device": "/dev/nbd0", 00:11:52.051 "bdev_name": "Malloc0" 00:11:52.051 }, 00:11:52.051 { 00:11:52.051 "nbd_device": "/dev/nbd1", 00:11:52.051 "bdev_name": "Malloc1" 00:11:52.051 } 00:11:52.051 ]' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:52.051 /dev/nbd1' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:52.051 /dev/nbd1' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:52.051 256+0 records in 00:11:52.051 256+0 records out 00:11:52.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126511 s, 82.9 MB/s 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:52.051 256+0 records in 00:11:52.051 256+0 records out 00:11:52.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234505 s, 44.7 MB/s 00:11:52.051 22:53:07 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:52.051 256+0 records in 00:11:52.051 256+0 records out 00:11:52.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275013 s, 38.1 MB/s 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.051 22:53:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.311 22:53:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.570 22:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:52.829 22:53:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:52.829 22:53:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:53.396 22:53:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:54.775 [2024-12-09 22:53:10.191578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:54.776 [2024-12-09 22:53:10.310027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.776 [2024-12-09 22:53:10.310031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.776 
[2024-12-09 22:53:10.520953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:54.776 [2024-12-09 22:53:10.521057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:56.153 spdk_app_start Round 1 00:11:56.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:56.153 22:53:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:56.153 22:53:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:56.153 22:53:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58708 /var/tmp/spdk-nbd.sock 00:11:56.153 22:53:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58708 ']' 00:11:56.153 22:53:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:56.153 22:53:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.153 22:53:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:56.153 22:53:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.153 22:53:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:56.413 22:53:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.413 22:53:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:56.413 22:53:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:56.673 Malloc0 00:11:56.932 22:53:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:57.191 Malloc1 00:11:57.191 22:53:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:57.191 22:53:12 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:57.191 22:53:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:57.450 /dev/nbd0 00:11:57.450 22:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:57.450 22:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:57.450 1+0 records in 00:11:57.450 1+0 records out 00:11:57.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358735 s, 11.4 MB/s 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:57.450 
22:53:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:57.450 22:53:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:57.450 22:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.450 22:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:57.450 22:53:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:57.707 /dev/nbd1 00:11:57.707 22:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:57.707 22:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:57.707 1+0 records in 00:11:57.707 1+0 records out 00:11:57.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463459 s, 8.8 MB/s 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:57.707 22:53:13 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:57.707 22:53:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:57.707 22:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.707 22:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:57.707 22:53:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:57.708 22:53:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.708 22:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:57.966 { 00:11:57.966 "nbd_device": "/dev/nbd0", 00:11:57.966 "bdev_name": "Malloc0" 00:11:57.966 }, 00:11:57.966 { 00:11:57.966 "nbd_device": "/dev/nbd1", 00:11:57.966 "bdev_name": "Malloc1" 00:11:57.966 } 00:11:57.966 ]' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:57.966 { 00:11:57.966 "nbd_device": "/dev/nbd0", 00:11:57.966 "bdev_name": "Malloc0" 00:11:57.966 }, 00:11:57.966 { 00:11:57.966 "nbd_device": "/dev/nbd1", 00:11:57.966 "bdev_name": "Malloc1" 00:11:57.966 } 00:11:57.966 ]' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:57.966 /dev/nbd1' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:57.966 /dev/nbd1' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:57.966 
22:53:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:57.966 256+0 records in 00:11:57.966 256+0 records out 00:11:57.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126093 s, 83.2 MB/s 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.966 22:53:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:57.966 256+0 records in 00:11:57.966 256+0 records out 00:11:57.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242741 s, 43.2 MB/s 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:57.967 256+0 records in 00:11:57.967 256+0 records out 00:11:57.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289666 s, 36.2 MB/s 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.967 22:53:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:58.225 22:53:14 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.226 22:53:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.485 22:53:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:58.744 22:53:14 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:58.744 22:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:58.744 22:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:58.744 22:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:58.744 22:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:58.744 22:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:59.032 22:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:59.032 22:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:59.032 22:53:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:59.032 22:53:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:59.032 22:53:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:59.032 22:53:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:59.032 22:53:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:59.302 22:53:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:00.680 [2024-12-09 22:53:16.389383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:00.680 [2024-12-09 22:53:16.507611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.680 [2024-12-09 22:53:16.507641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.939 [2024-12-09 22:53:16.724412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:00.939 [2024-12-09 22:53:16.724502] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:12:02.317 22:53:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:02.317 spdk_app_start Round 2 00:12:02.317 22:53:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:02.317 22:53:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58708 /var/tmp/spdk-nbd.sock 00:12:02.317 22:53:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58708 ']' 00:12:02.317 22:53:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:02.317 22:53:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:02.317 22:53:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:02.317 22:53:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.317 22:53:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:02.577 22:53:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.577 22:53:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:02.577 22:53:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:02.837 Malloc0 00:12:02.837 22:53:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:03.095 Malloc1 00:12:03.353 22:53:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:03.353 
22:53:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.353 22:53:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:03.353 /dev/nbd0 00:12:03.353 22:53:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.353 22:53:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.353 22:53:19 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.353 22:53:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:03.353 1+0 records in 00:12:03.353 1+0 records out 00:12:03.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373112 s, 11.0 MB/s 00:12:03.612 22:53:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:03.612 22:53:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:03.612 22:53:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:03.612 22:53:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.612 22:53:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:03.612 22:53:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.612 22:53:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.612 22:53:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:03.612 /dev/nbd1 00:12:03.612 22:53:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.871 22:53:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.871 22:53:19 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:03.871 1+0 records in 00:12:03.871 1+0 records out 00:12:03.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434163 s, 9.4 MB/s 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.871 22:53:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:03.871 22:53:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.871 22:53:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.871 22:53:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:03.871 22:53:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.871 22:53:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:04.131 { 00:12:04.131 "nbd_device": "/dev/nbd0", 00:12:04.131 "bdev_name": "Malloc0" 00:12:04.131 }, 00:12:04.131 { 00:12:04.131 "nbd_device": "/dev/nbd1", 00:12:04.131 "bdev_name": 
"Malloc1" 00:12:04.131 } 00:12:04.131 ]' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:04.131 { 00:12:04.131 "nbd_device": "/dev/nbd0", 00:12:04.131 "bdev_name": "Malloc0" 00:12:04.131 }, 00:12:04.131 { 00:12:04.131 "nbd_device": "/dev/nbd1", 00:12:04.131 "bdev_name": "Malloc1" 00:12:04.131 } 00:12:04.131 ]' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:04.131 /dev/nbd1' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:04.131 /dev/nbd1' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:04.131 256+0 records in 00:12:04.131 256+0 records out 00:12:04.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130793 s, 80.2 MB/s 
00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:04.131 256+0 records in 00:12:04.131 256+0 records out 00:12:04.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250314 s, 41.9 MB/s 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:04.131 256+0 records in 00:12:04.131 256+0 records out 00:12:04.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291677 s, 35.9 MB/s 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.131 22:53:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.390 22:53:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.649 22:53:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:04.908 22:53:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:04.908 22:53:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:05.475 22:53:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:06.850 [2024-12-09 22:53:22.335930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.850 [2024-12-09 22:53:22.457739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.850 [2024-12-09 22:53:22.457743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.850 [2024-12-09 22:53:22.665041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:06.850 [2024-12-09 22:53:22.665124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:08.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:08.782 22:53:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58708 /var/tmp/spdk-nbd.sock 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58708 ']' 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:08.782 22:53:24 event.app_repeat -- event/event.sh@39 -- # killprocess 58708 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58708 ']' 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58708 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58708 00:12:08.782 killing process with pid 58708 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58708' 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58708 00:12:08.782 22:53:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58708 00:12:09.720 spdk_app_start is called in Round 0. 00:12:09.720 Shutdown signal received, stop current app iteration 00:12:09.720 Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 reinitialization... 00:12:09.720 spdk_app_start is called in Round 1. 00:12:09.720 Shutdown signal received, stop current app iteration 00:12:09.720 Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 reinitialization... 00:12:09.720 spdk_app_start is called in Round 2. 
00:12:09.720 Shutdown signal received, stop current app iteration 00:12:09.720 Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 reinitialization... 00:12:09.720 spdk_app_start is called in Round 3. 00:12:09.720 Shutdown signal received, stop current app iteration 00:12:09.720 ************************************ 00:12:09.720 END TEST app_repeat 00:12:09.720 ************************************ 00:12:09.720 22:53:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:09.720 22:53:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:09.720 00:12:09.720 real 0m20.426s 00:12:09.720 user 0m44.016s 00:12:09.720 sys 0m3.124s 00:12:09.720 22:53:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.720 22:53:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:09.979 22:53:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:09.979 22:53:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:09.979 22:53:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.979 22:53:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.980 22:53:25 event -- common/autotest_common.sh@10 -- # set +x 00:12:09.980 ************************************ 00:12:09.980 START TEST cpu_locks 00:12:09.980 ************************************ 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:09.980 * Looking for test storage... 
00:12:09.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.980 22:53:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.980 --rc genhtml_branch_coverage=1 00:12:09.980 --rc genhtml_function_coverage=1 00:12:09.980 --rc genhtml_legend=1 00:12:09.980 --rc geninfo_all_blocks=1 00:12:09.980 --rc geninfo_unexecuted_blocks=1 00:12:09.980 00:12:09.980 ' 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.980 --rc genhtml_branch_coverage=1 00:12:09.980 --rc genhtml_function_coverage=1 00:12:09.980 --rc genhtml_legend=1 00:12:09.980 --rc geninfo_all_blocks=1 00:12:09.980 --rc geninfo_unexecuted_blocks=1 
00:12:09.980 00:12:09.980 ' 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.980 --rc genhtml_branch_coverage=1 00:12:09.980 --rc genhtml_function_coverage=1 00:12:09.980 --rc genhtml_legend=1 00:12:09.980 --rc geninfo_all_blocks=1 00:12:09.980 --rc geninfo_unexecuted_blocks=1 00:12:09.980 00:12:09.980 ' 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.980 --rc genhtml_branch_coverage=1 00:12:09.980 --rc genhtml_function_coverage=1 00:12:09.980 --rc genhtml_legend=1 00:12:09.980 --rc geninfo_all_blocks=1 00:12:09.980 --rc geninfo_unexecuted_blocks=1 00:12:09.980 00:12:09.980 ' 00:12:09.980 22:53:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:09.980 22:53:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:09.980 22:53:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:09.980 22:53:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.980 22:53:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:10.240 ************************************ 00:12:10.240 START TEST default_locks 00:12:10.240 ************************************ 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59166 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:10.240 
22:53:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59166 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59166 ']' 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.240 22:53:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:10.240 [2024-12-09 22:53:25.938029] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:12:10.240 [2024-12-09 22:53:25.938237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:12:10.499 [2024-12-09 22:53:26.114322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.499 [2024-12-09 22:53:26.240742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.435 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.435 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:12:11.435 22:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59166 00:12:11.435 22:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59166 00:12:11.435 22:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59166 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59166 ']' 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59166 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59166 00:12:11.694 killing process with pid 59166 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59166' 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59166 00:12:11.694 22:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59166 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59166 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59166 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59166 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59166 ']' 00:12:14.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.229 ERROR: process (pid: 59166) is no longer running 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:12:14.229 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59166) - No such process
00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:14.229 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:12:14.230
00:12:14.230 real 0m4.192s
00:12:14.230 user 0m4.170s
00:12:14.230 sys 0m0.590s
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.230 22:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:12:14.230 ************************************
00:12:14.230 END TEST default_locks
00:12:14.230 ************************************
00:12:14.490 22:53:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:12:14.490 22:53:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:14.490 22:53:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.490 22:53:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:12:14.490 ************************************
00:12:14.490 START TEST default_locks_via_rpc
00:12:14.490 ************************************
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59241
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59241
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59241 ']'
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:14.490 22:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:14.490 [2024-12-09 22:53:30.204330] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:14.490 [2024-12-09 22:53:30.204494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ]
00:12:14.750 [2024-12-09 22:53:30.378170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.750 [2024-12-09 22:53:30.499399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59241
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:12:15.689 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59241
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59241
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59241 ']'
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59241
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59241
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59241
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59241'
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59241
00:12:15.948 22:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59241
00:12:18.523
00:12:18.523 real 0m4.139s
00:12:18.523 user 0m4.077s
00:12:18.523 sys 0m0.610s
00:12:18.523 22:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:18.523 22:53:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:18.523 ************************************
00:12:18.523 END TEST default_locks_via_rpc
00:12:18.523 ************************************
00:12:18.523 22:53:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:12:18.523 22:53:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:18.523 22:53:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:18.523 22:53:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:12:18.523 ************************************
00:12:18.523 START TEST non_locking_app_on_locked_coremask
00:12:18.523 ************************************
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59315
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59315 /var/tmp/spdk.sock
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59315 ']'
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:18.523 22:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:18.783 [2024-12-09 22:53:34.412583] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:18.783 [2024-12-09 22:53:34.412737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ]
00:12:18.783 [2024-12-09 22:53:34.589252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:19.042 [2024-12-09 22:53:34.711892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59331
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59331 /var/tmp/spdk2.sock
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59331 ']'
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:19.981 22:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:20.241 [2024-12-09 22:53:35.731254] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:20.241 [2024-12-09 22:53:35.731388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59331 ]
00:12:20.241 [2024-12-09 22:53:35.907336] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:12:20.241 [2024-12-09 22:53:35.907427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:20.501 [2024-12-09 22:53:36.146332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.104 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:23.104 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:12:23.104 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59315
00:12:23.104 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59315
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59315
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59315 ']'
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59315
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59315
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59315
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59315'
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59315
00:12:23.105 22:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59315
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59331
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59331 ']'
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59331
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59331
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59331
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59331'
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59331
00:12:28.381 22:53:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59331
00:12:30.920
00:12:30.920 real 0m12.107s
00:12:30.920 user 0m12.399s
00:12:30.920 sys 0m1.301s
00:12:30.920 22:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:30.920 22:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:30.920 ************************************
00:12:30.920 END TEST non_locking_app_on_locked_coremask
00:12:30.920 ************************************
00:12:30.920 22:53:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:12:30.920 22:53:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:30.920 22:53:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:30.920 22:53:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:12:30.920 ************************************
00:12:30.920 START TEST locking_app_on_unlocked_coremask
00:12:30.920 ************************************
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59490
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59490 /var/tmp/spdk.sock
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59490 ']'
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:30.920 22:53:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:31.179 [2024-12-09 22:53:46.600456] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:31.179 [2024-12-09 22:53:46.600643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ]
00:12:31.179 [2024-12-09 22:53:46.782514] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:12:31.179 [2024-12-09 22:53:46.782605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:31.179 [2024-12-09 22:53:46.931672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59512
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59512 /var/tmp/spdk2.sock
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59512 ']'
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:32.557 22:53:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:32.557 [2024-12-09 22:53:48.165959] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:32.557 [2024-12-09 22:53:48.166132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ]
00:12:32.557 [2024-12-09 22:53:48.348449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:32.816 [2024-12-09 22:53:48.660752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:35.349 22:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:35.349 22:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:12:35.349 22:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59512
00:12:35.349 22:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59512
00:12:35.349 22:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59490
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59490 ']'
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59490
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59490
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59490
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59490'
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59490
00:12:35.916 22:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59490
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59512
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59512 ']'
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59512
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59512
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59512
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59512'
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59512
00:12:42.480 22:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59512
00:12:45.012 ************************************
00:12:45.012 END TEST locking_app_on_unlocked_coremask
00:12:45.012 ************************************
00:12:45.012
00:12:45.012 real 0m13.877s
00:12:45.012 user 0m13.846s
00:12:45.012 sys 0m1.850s
00:12:45.012 22:54:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:45.012 22:54:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:45.012 22:54:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:12:45.012 22:54:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:45.012 22:54:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:45.012 22:54:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:12:45.012 ************************************
00:12:45.012 START TEST locking_app_on_locked_coremask
00:12:45.012 ************************************
00:12:45.012 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:12:45.012 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59682
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59682 /var/tmp/spdk.sock
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59682 ']'
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:45.013 22:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:12:45.013 [2024-12-09 22:54:00.534655] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:45.013 [2024-12-09 22:54:00.534842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ]
00:12:45.013 [2024-12-09 22:54:00.720062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:45.270 [2024-12-09 22:54:00.891868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59698
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59698 /var/tmp/spdk2.sock
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59698 /var/tmp/spdk2.sock
00:12:46.203 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59698 /var/tmp/spdk2.sock
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59698 ']'
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:46.204 22:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:46.462 [2024-12-09 22:54:02.114051] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:46.462 [2024-12-09 22:54:02.114228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ]
00:12:46.462 [2024-12-09 22:54:02.296597] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59682 has claimed it.
00:12:46.462 [2024-12-09 22:54:02.296689] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:12:47.027 ERROR: process (pid: 59698) is no longer running
00:12:47.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59698) - No such process
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59682
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59682
00:12:47.027 22:54:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59682
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59682 ']'
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59682
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59682
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59682
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59682'
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59682
00:12:47.594 22:54:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59682
00:12:50.885
00:12:50.885 real 0m5.683s
00:12:50.885 user 0m5.717s
00:12:50.885 sys 0m1.048s
00:12:50.885 22:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:50.885 22:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:50.885 ************************************
00:12:50.885 END TEST locking_app_on_locked_coremask
00:12:50.885 ************************************
00:12:50.885 22:54:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:12:50.885 22:54:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:50.885 22:54:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:50.885 22:54:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:12:50.885 ************************************
00:12:50.885 START TEST locking_overlapped_coremask
00:12:50.885 ************************************
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59773
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59773 /var/tmp/spdk.sock
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59773 ']'
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:50.885 22:54:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:12:50.885 [2024-12-09 22:54:06.299078] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:50.885 [2024-12-09 22:54:06.299352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59773 ]
00:12:50.885 [2024-12-09 22:54:06.483733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:50.885 [2024-12-09 22:54:06.642182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:50.885 [2024-12-09 22:54:06.642249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:50.885 [2024-12-09 22:54:06.642295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59802
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59802 /var/tmp/spdk2.sock
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59802 /var/tmp/spdk2.sock
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:52.261 22:54:07 event.cpu_locks.locking_overlapped_coremask
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59802 /var/tmp/spdk2.sock 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59802 ']' 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.262 22:54:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:52.262 [2024-12-09 22:54:07.919443] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:12:52.262 [2024-12-09 22:54:07.919772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59802 ] 00:12:52.262 [2024-12-09 22:54:08.110619] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59773 has claimed it. 00:12:52.262 [2024-12-09 22:54:08.110706] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:12:52.828 ERROR: process (pid: 59802) is no longer running 00:12:52.828 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59802) - No such process 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59773 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59773 ']' 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59773 00:12:52.828 22:54:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59773 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59773' 00:12:52.828 killing process with pid 59773 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59773 00:12:52.828 22:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59773 00:12:56.112 00:12:56.112 real 0m5.486s 00:12:56.112 user 0m14.820s 00:12:56.112 sys 0m0.850s 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:56.112 ************************************ 00:12:56.112 END TEST locking_overlapped_coremask 00:12:56.112 ************************************ 00:12:56.112 22:54:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:56.112 22:54:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.112 22:54:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.112 22:54:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:56.112 ************************************ 00:12:56.112 START TEST 
locking_overlapped_coremask_via_rpc 00:12:56.112 ************************************ 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59872 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59872 /var/tmp/spdk.sock 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59872 ']' 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.112 22:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.112 [2024-12-09 22:54:11.838658] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:12:56.112 [2024-12-09 22:54:11.838802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:12:56.371 [2024-12-09 22:54:12.024765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:56.371 [2024-12-09 22:54:12.024848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.371 [2024-12-09 22:54:12.189130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.371 [2024-12-09 22:54:12.189281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.371 [2024-12-09 22:54:12.189322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59895 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59895 /var/tmp/spdk2.sock 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59895 ']' 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.746 22:54:13 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:57.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.746 22:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.746 [2024-12-09 22:54:13.422309] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:12:57.746 [2024-12-09 22:54:13.422612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59895 ] 00:12:58.005 [2024-12-09 22:54:13.607811] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:58.005 [2024-12-09 22:54:13.607881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.263 [2024-12-09 22:54:13.915229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.263 [2024-12-09 22:54:13.915497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:58.263 [2024-12-09 22:54:13.918491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.793 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.793 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:00.793 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:00.793 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.793 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.794 22:54:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.794 [2024-12-09 22:54:16.229776] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59872 has claimed it. 00:13:00.794 request: 00:13:00.794 { 00:13:00.794 "method": "framework_enable_cpumask_locks", 00:13:00.794 "req_id": 1 00:13:00.794 } 00:13:00.794 Got JSON-RPC error response 00:13:00.794 response: 00:13:00.794 { 00:13:00.794 "code": -32603, 00:13:00.794 "message": "Failed to claim CPU core: 2" 00:13:00.794 } 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59872 /var/tmp/spdk.sock 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59872 ']' 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59895 /var/tmp/spdk2.sock 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59895 ']' 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:00.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.794 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:01.069 ************************************ 00:13:01.069 END TEST locking_overlapped_coremask_via_rpc 00:13:01.069 ************************************ 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:01.069 00:13:01.069 real 0m5.067s 00:13:01.069 user 0m1.518s 00:13:01.069 sys 0m0.243s 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.069 22:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.069 22:54:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:01.069 22:54:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59872 ]] 00:13:01.069 22:54:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59872 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59872 ']' 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59872 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59872 00:13:01.069 killing process with pid 59872 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59872' 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59872 00:13:01.069 22:54:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59872 00:13:04.360 22:54:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59895 ]] 00:13:04.360 22:54:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59895 00:13:04.360 22:54:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59895 ']' 00:13:04.361 22:54:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59895 00:13:04.361 22:54:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:04.361 22:54:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.361 22:54:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59895 00:13:04.361 killing process with pid 59895 00:13:04.361 22:54:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:04.361 22:54:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:04.361 22:54:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59895' 00:13:04.361 22:54:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59895 00:13:04.361 22:54:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59895 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:07.669 Process with pid 59872 is not found 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59872 ]] 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59872 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59872 ']' 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59872 00:13:07.669 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59872) - No such process 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59872 is not found' 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59895 ]] 00:13:07.669 Process with pid 59895 is not found 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59895 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59895 ']' 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59895 00:13:07.669 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59895) - No such process 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59895 is not found' 00:13:07.669 22:54:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:07.669 00:13:07.669 real 0m57.368s 00:13:07.669 user 1m39.340s 00:13:07.669 sys 0m8.171s 00:13:07.669 ************************************ 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.669 22:54:22 event.cpu_locks -- common/autotest_common.sh@10 
-- # set +x 00:13:07.669 END TEST cpu_locks 00:13:07.669 ************************************ 00:13:07.669 ************************************ 00:13:07.669 END TEST event 00:13:07.669 ************************************ 00:13:07.669 00:13:07.669 real 1m30.527s 00:13:07.669 user 2m45.620s 00:13:07.669 sys 0m12.571s 00:13:07.669 22:54:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.669 22:54:23 event -- common/autotest_common.sh@10 -- # set +x 00:13:07.669 22:54:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:07.669 22:54:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.669 22:54:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.669 22:54:23 -- common/autotest_common.sh@10 -- # set +x 00:13:07.669 ************************************ 00:13:07.669 START TEST thread 00:13:07.669 ************************************ 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:07.669 * Looking for test storage... 
00:13:07.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.669 22:54:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.669 22:54:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.669 22:54:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.669 22:54:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.669 22:54:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.669 22:54:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.669 22:54:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.669 22:54:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.669 22:54:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.669 22:54:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.669 22:54:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.669 22:54:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:07.669 22:54:23 thread -- scripts/common.sh@345 -- # : 1 00:13:07.669 22:54:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.669 22:54:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.669 22:54:23 thread -- scripts/common.sh@365 -- # decimal 1 00:13:07.669 22:54:23 thread -- scripts/common.sh@353 -- # local d=1 00:13:07.669 22:54:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.669 22:54:23 thread -- scripts/common.sh@355 -- # echo 1 00:13:07.669 22:54:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.669 22:54:23 thread -- scripts/common.sh@366 -- # decimal 2 00:13:07.669 22:54:23 thread -- scripts/common.sh@353 -- # local d=2 00:13:07.669 22:54:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.669 22:54:23 thread -- scripts/common.sh@355 -- # echo 2 00:13:07.669 22:54:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.669 22:54:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.669 22:54:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.669 22:54:23 thread -- scripts/common.sh@368 -- # return 0 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.669 --rc genhtml_branch_coverage=1 00:13:07.669 --rc genhtml_function_coverage=1 00:13:07.669 --rc genhtml_legend=1 00:13:07.669 --rc geninfo_all_blocks=1 00:13:07.669 --rc geninfo_unexecuted_blocks=1 00:13:07.669 00:13:07.669 ' 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.669 --rc genhtml_branch_coverage=1 00:13:07.669 --rc genhtml_function_coverage=1 00:13:07.669 --rc genhtml_legend=1 00:13:07.669 --rc geninfo_all_blocks=1 00:13:07.669 --rc geninfo_unexecuted_blocks=1 00:13:07.669 00:13:07.669 ' 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:07.669 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.669 --rc genhtml_branch_coverage=1 00:13:07.669 --rc genhtml_function_coverage=1 00:13:07.669 --rc genhtml_legend=1 00:13:07.669 --rc geninfo_all_blocks=1 00:13:07.669 --rc geninfo_unexecuted_blocks=1 00:13:07.669 00:13:07.669 ' 00:13:07.669 22:54:23 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.669 --rc genhtml_branch_coverage=1 00:13:07.669 --rc genhtml_function_coverage=1 00:13:07.669 --rc genhtml_legend=1 00:13:07.670 --rc geninfo_all_blocks=1 00:13:07.670 --rc geninfo_unexecuted_blocks=1 00:13:07.670 00:13:07.670 ' 00:13:07.670 22:54:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:07.670 22:54:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:07.670 22:54:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.670 22:54:23 thread -- common/autotest_common.sh@10 -- # set +x 00:13:07.670 ************************************ 00:13:07.670 START TEST thread_poller_perf 00:13:07.670 ************************************ 00:13:07.670 22:54:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:07.670 [2024-12-09 22:54:23.361679] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:13:07.670 [2024-12-09 22:54:23.361824] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60107 ] 00:13:07.928 [2024-12-09 22:54:23.546198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.928 [2024-12-09 22:54:23.690881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.928 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:09.309 [2024-12-09T22:54:25.165Z] ====================================== 00:13:09.309 [2024-12-09T22:54:25.165Z] busy:2301975894 (cyc) 00:13:09.309 [2024-12-09T22:54:25.165Z] total_run_count: 379000 00:13:09.309 [2024-12-09T22:54:25.165Z] tsc_hz: 2290000000 (cyc) 00:13:09.309 [2024-12-09T22:54:25.165Z] ====================================== 00:13:09.309 [2024-12-09T22:54:25.165Z] poller_cost: 6073 (cyc), 2651 (nsec) 00:13:09.309 00:13:09.309 real 0m1.648s 00:13:09.309 user 0m1.419s 00:13:09.309 sys 0m0.119s 00:13:09.309 22:54:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.309 22:54:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:09.309 ************************************ 00:13:09.309 END TEST thread_poller_perf 00:13:09.309 ************************************ 00:13:09.309 22:54:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:09.309 22:54:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:09.309 22:54:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.309 22:54:25 thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.309 ************************************ 00:13:09.309 START TEST thread_poller_perf 00:13:09.309 
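The first poller_perf run above reports `poller_cost: 6073 (cyc), 2651 (nsec)` from `busy`, `total_run_count`, and `tsc_hz`. As a minimal sketch (assumed arithmetic, not SPDK source — plain integer division over the logged numbers reproduces the report):

```shell
# Derive poller_cost from the figures logged by the -l 1 (1 us period) run.
busy_cyc=2301975894    # busy TSC cycles over the 1 s run
runs=379000            # total_run_count
tsc_hz=2290000000      # TSC frequency (cycles/sec)

cost_cyc=$(( busy_cyc / runs ))                    # cycles per poller call
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # cycles -> nanoseconds

echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

Integer division matches the truncated values in the log output.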
************************************ 00:13:09.309 22:54:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:09.309 [2024-12-09 22:54:25.071536] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:09.309 [2024-12-09 22:54:25.071650] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60143 ] 00:13:09.568 [2024-12-09 22:54:25.253664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.568 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:09.568 [2024-12-09 22:54:25.396715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.944 [2024-12-09T22:54:26.800Z] ====================================== 00:13:10.944 [2024-12-09T22:54:26.800Z] busy:2294174906 (cyc) 00:13:10.944 [2024-12-09T22:54:26.800Z] total_run_count: 4600000 00:13:10.945 [2024-12-09T22:54:26.801Z] tsc_hz: 2290000000 (cyc) 00:13:10.945 [2024-12-09T22:54:26.801Z] ====================================== 00:13:10.945 [2024-12-09T22:54:26.801Z] poller_cost: 498 (cyc), 217 (nsec) 00:13:10.945 00:13:10.945 real 0m1.648s 00:13:10.945 user 0m1.410s 00:13:10.945 sys 0m0.130s 00:13:10.945 22:54:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.945 22:54:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:10.945 ************************************ 00:13:10.945 END TEST thread_poller_perf 00:13:10.945 ************************************ 00:13:10.945 22:54:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:10.945 ************************************ 00:13:10.945 END TEST thread 00:13:10.945 ************************************ 00:13:10.945 
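The `-l 0` (zero-period) run above completes far more iterations in the same second — a zero-period poller fires on every reactor loop rather than on a timer — so its per-call cost is much lower. A sketch of the same cycles-to-cost arithmetic, packaged as a helper (assumed arithmetic, reproducing the logged numbers):

```shell
# poller_cost BUSY_CYC RUN_COUNT TSC_HZ -> "cost_cyc cost_nsec"
poller_cost() {
    echo "$(( $1 / $2 )) $(( $1 / $2 * 1000000000 / $3 ))"
}

# Figures from the -l 0 run logged above.
poller_cost 2294174906 4600000 2290000000   # -> 498 217
```

Compare with the timed run's 6073 cyc / 2651 nsec: most of that cost is timer bookkeeping, not the poll callback itself.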
00:13:10.945 real 0m3.655s 00:13:10.945 user 0m2.993s 00:13:10.945 sys 0m0.460s 00:13:10.945 22:54:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.945 22:54:26 thread -- common/autotest_common.sh@10 -- # set +x 00:13:10.945 22:54:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:13:10.945 22:54:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:10.945 22:54:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.945 22:54:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.945 22:54:26 -- common/autotest_common.sh@10 -- # set +x 00:13:10.945 ************************************ 00:13:10.945 START TEST app_cmdline 00:13:10.945 ************************************ 00:13:10.945 22:54:26 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:11.203 * Looking for test storage... 00:13:11.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:11.203 22:54:26 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.203 22:54:26 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.203 22:54:26 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.203 22:54:26 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.203 22:54:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.203 22:54:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.203 22:54:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.203 22:54:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:13:11.204 22:54:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.204 22:54:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.204 --rc genhtml_branch_coverage=1 00:13:11.204 --rc genhtml_function_coverage=1 00:13:11.204 --rc 
genhtml_legend=1 00:13:11.204 --rc geninfo_all_blocks=1 00:13:11.204 --rc geninfo_unexecuted_blocks=1 00:13:11.204 00:13:11.204 ' 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.204 --rc genhtml_branch_coverage=1 00:13:11.204 --rc genhtml_function_coverage=1 00:13:11.204 --rc genhtml_legend=1 00:13:11.204 --rc geninfo_all_blocks=1 00:13:11.204 --rc geninfo_unexecuted_blocks=1 00:13:11.204 00:13:11.204 ' 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.204 --rc genhtml_branch_coverage=1 00:13:11.204 --rc genhtml_function_coverage=1 00:13:11.204 --rc genhtml_legend=1 00:13:11.204 --rc geninfo_all_blocks=1 00:13:11.204 --rc geninfo_unexecuted_blocks=1 00:13:11.204 00:13:11.204 ' 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.204 --rc genhtml_branch_coverage=1 00:13:11.204 --rc genhtml_function_coverage=1 00:13:11.204 --rc genhtml_legend=1 00:13:11.204 --rc geninfo_all_blocks=1 00:13:11.204 --rc geninfo_unexecuted_blocks=1 00:13:11.204 00:13:11.204 ' 00:13:11.204 22:54:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:11.204 22:54:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60232 00:13:11.204 22:54:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:11.204 22:54:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60232 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60232 ']' 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.204 22:54:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:11.462 [2024-12-09 22:54:27.134080] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:11.462 [2024-12-09 22:54:27.134330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 00:13:11.721 [2024-12-09 22:54:27.320033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.721 [2024-12-09 22:54:27.462002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:13.097 { 00:13:13.097 "version": "SPDK v25.01-pre git sha1 06358c250", 00:13:13.097 "fields": { 00:13:13.097 "major": 25, 00:13:13.097 "minor": 1, 00:13:13.097 "patch": 0, 00:13:13.097 "suffix": "-pre", 00:13:13.097 "commit": "06358c250" 00:13:13.097 } 00:13:13.097 } 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:13.097 22:54:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:13.097 22:54:28 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:13.356 request: 00:13:13.356 { 00:13:13.356 "method": "env_dpdk_get_mem_stats", 00:13:13.356 "req_id": 1 00:13:13.356 } 00:13:13.356 Got JSON-RPC error response 00:13:13.356 response: 00:13:13.356 { 00:13:13.356 "code": -32601, 00:13:13.356 "message": "Method not found" 00:13:13.356 } 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.356 22:54:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60232 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60232 ']' 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60232 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60232 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60232' 00:13:13.356 killing process with pid 60232 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 60232 00:13:13.356 22:54:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 60232 00:13:16.747 00:13:16.747 real 0m5.061s 00:13:16.747 user 0m5.137s 00:13:16.747 sys 0m0.823s 00:13:16.747 22:54:31 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.747 22:54:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 ************************************ 00:13:16.747 END TEST app_cmdline 00:13:16.747 ************************************ 00:13:16.747 22:54:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:16.747 22:54:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.747 22:54:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.747 22:54:31 -- common/autotest_common.sh@10 -- # set +x 00:13:16.747 ************************************ 00:13:16.747 START TEST version 00:13:16.747 ************************************ 00:13:16.747 22:54:31 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:16.747 * Looking for test storage... 00:13:16.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:16.747 22:54:32 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:16.747 22:54:32 version -- common/autotest_common.sh@1711 -- # lcov --version 00:13:16.747 22:54:32 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:16.747 22:54:32 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:16.747 22:54:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.747 22:54:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.747 22:54:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.747 22:54:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.747 22:54:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.747 22:54:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.747 22:54:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.747 22:54:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.747 22:54:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.747 22:54:32 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:13:16.747 22:54:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.747 22:54:32 version -- scripts/common.sh@344 -- # case "$op" in 00:13:16.747 22:54:32 version -- scripts/common.sh@345 -- # : 1 00:13:16.747 22:54:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.747 22:54:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.747 22:54:32 version -- scripts/common.sh@365 -- # decimal 1 00:13:16.747 22:54:32 version -- scripts/common.sh@353 -- # local d=1 00:13:16.747 22:54:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.747 22:54:32 version -- scripts/common.sh@355 -- # echo 1 00:13:16.747 22:54:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.747 22:54:32 version -- scripts/common.sh@366 -- # decimal 2 00:13:16.747 22:54:32 version -- scripts/common.sh@353 -- # local d=2 00:13:16.747 22:54:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.747 22:54:32 version -- scripts/common.sh@355 -- # echo 2 00:13:16.747 22:54:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.747 22:54:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.747 22:54:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.747 22:54:32 version -- scripts/common.sh@368 -- # return 0 00:13:16.747 22:54:32 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.747 22:54:32 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 version -- app/version.sh@17 -- # get_header_version major 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # cut -f2 00:13:16.748 22:54:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # tr -d '"' 00:13:16.748 22:54:32 version -- app/version.sh@17 -- # major=25 00:13:16.748 22:54:32 version -- app/version.sh@18 -- # get_header_version minor 00:13:16.748 22:54:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # cut -f2 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # tr -d '"' 00:13:16.748 22:54:32 version -- app/version.sh@18 -- # minor=1 00:13:16.748 22:54:32 
version -- app/version.sh@19 -- # get_header_version patch 00:13:16.748 22:54:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # cut -f2 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # tr -d '"' 00:13:16.748 22:54:32 version -- app/version.sh@19 -- # patch=0 00:13:16.748 22:54:32 version -- app/version.sh@20 -- # get_header_version suffix 00:13:16.748 22:54:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # cut -f2 00:13:16.748 22:54:32 version -- app/version.sh@14 -- # tr -d '"' 00:13:16.748 22:54:32 version -- app/version.sh@20 -- # suffix=-pre 00:13:16.748 22:54:32 version -- app/version.sh@22 -- # version=25.1 00:13:16.748 22:54:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:16.748 22:54:32 version -- app/version.sh@28 -- # version=25.1rc0 00:13:16.748 22:54:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:16.748 22:54:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:16.748 22:54:32 version -- app/version.sh@30 -- # py_version=25.1rc0 00:13:16.748 22:54:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:13:16.748 ************************************ 00:13:16.748 END TEST version 00:13:16.748 ************************************ 00:13:16.748 00:13:16.748 real 0m0.318s 00:13:16.748 user 0m0.188s 00:13:16.748 sys 0m0.186s 00:13:16.748 22:54:32 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.748 22:54:32 version -- common/autotest_common.sh@10 -- # set +x 00:13:16.748 
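The version test above builds `25.1rc0` by running a `grep | cut -f2 | tr -d '"'` pipeline per field of `include/spdk/version.h`. A self-contained sketch of that pipeline against a throwaway header (the `/tmp/version.h` path and tab-separated `#define` layout are assumptions for illustration; `cut -f2` relies on the tab between the macro name and its value):

```shell
# Fabricate a minimal version.h-style header with tab-separated defines.
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' \
    > /tmp/version.h

# Same extraction shape as get_header_version in the log above.
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /tmp/version.h \
        | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /tmp/version.h \
        | cut -f2 | tr -d '"')

echo "${major}${suffix}"   # e.g. 25-pre
```

The `tr -d '"'` step matters only for string-valued macros such as the suffix.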
22:54:32 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:13:16.748 22:54:32 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:13:16.748 22:54:32 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:16.748 22:54:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.748 22:54:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.748 22:54:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.748 ************************************ 00:13:16.748 START TEST bdev_raid 00:13:16.748 ************************************ 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:16.748 * Looking for test storage... 00:13:16.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@345 -- # : 1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.748 22:54:32 bdev_raid -- scripts/common.sh@368 -- # return 0 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:16.748 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:16.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.748 --rc genhtml_branch_coverage=1 00:13:16.748 --rc genhtml_function_coverage=1 00:13:16.748 --rc genhtml_legend=1 00:13:16.748 --rc geninfo_all_blocks=1 00:13:16.748 --rc geninfo_unexecuted_blocks=1 00:13:16.748 00:13:16.748 ' 00:13:16.748 22:54:32 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:16.748 22:54:32 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:16.748 22:54:32 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:13:16.748 22:54:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:13:16.748 22:54:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:13:16.748 22:54:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:13:16.748 22:54:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.748 22:54:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.748 ************************************ 
00:13:16.748 START TEST raid1_resize_data_offset_test
00:13:16.748 ************************************
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60431
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60431'
00:13:16.748 Process raid pid: 60431
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60431
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60431 ']'
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:16.748 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:16.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:16.749 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:16.749 22:54:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:17.007 [2024-12-09 22:54:32.629752] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:13:17.007 [2024-12-09 22:54:32.630046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:17.007 [2024-12-09 22:54:32.816813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:17.267 [2024-12-09 22:54:32.964405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:17.527 [2024-12-09 22:54:33.219906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:17.527 [2024-12-09 22:54:33.220084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:17.787 malloc0
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.787 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.047 malloc1
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.047 null0
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.047 [2024-12-09 22:54:33.712757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:13:18.047 [2024-12-09 22:54:33.715288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:18.047 [2024-12-09 22:54:33.715416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:13:18.047 [2024-12-09 22:54:33.715698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:18.047 [2024-12-09 22:54:33.715762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:13:18.047 [2024-12-09 22:54:33.716197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:13:18.047 [2024-12-09 22:54:33.716546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:18.047 [2024-12-09 22:54:33.716606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:13:18.047 [2024-12-09 22:54:33.716927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.047 [2024-12-09 22:54:33.772881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.047 22:54:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.618 malloc2
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.618 [2024-12-09 22:54:34.438352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:18.618 [2024-12-09 22:54:34.458739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.618 [2024-12-09 22:54:34.461113] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:13:18.618 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60431
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60431 ']'
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60431
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60431
killing process with pid 60431
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60431'
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60431
00:13:18.877 22:54:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60431
00:13:18.877 [2024-12-09 22:54:34.555232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:18.877 [2024-12-09 22:54:34.557103] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:13:18.877 [2024-12-09 22:54:34.557179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:18.877 [2024-12-09 22:54:34.557203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:13:18.877 [2024-12-09 22:54:34.598953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:18.877 [2024-12-09 22:54:34.599348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:18.877 [2024-12-09 22:54:34.599368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:13:20.781 [2024-12-09 22:54:36.631428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:22.184 ************************************
00:13:22.184 END TEST raid1_resize_data_offset_test
00:13:22.184 ************************************
00:13:22.184 22:54:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:13:22.184
00:13:22.184 real 0m5.403s
00:13:22.184 user 0m5.149s
00:13:22.184 sys 0m0.748s
00:13:22.184 22:54:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:22.184 22:54:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:13:22.184 22:54:37 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:13:22.184 22:54:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:22.184 22:54:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:22.184 22:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:22.184 ************************************
00:13:22.184 START TEST raid0_resize_superblock_test
00:13:22.184 ************************************
00:13:22.184 22:54:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:13:22.184 22:54:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:13:22.184 Process raid pid: 60520
00:13:22.184 22:54:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60520
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60520'
00:13:22.184 22:54:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60520
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60520 ']'
00:13:22.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:22.184 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:22.443 [2024-12-09 22:54:38.098617] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:13:22.443 [2024-12-09 22:54:38.098876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:22.443 [2024-12-09 22:54:38.281071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:22.702 [2024-12-09 22:54:38.425003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:22.962 [2024-12-09 22:54:38.669546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:22.962 [2024-12-09 22:54:38.669738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:23.223 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:23.223 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:13:23.223 22:54:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:13:23.223 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.223 22:54:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:23.792 malloc0
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:23.792 [2024-12-09 22:54:39.615382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:13:23.792 [2024-12-09 22:54:39.615470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:23.792 [2024-12-09 22:54:39.615500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:23.792 [2024-12-09 22:54:39.615514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:23.792 [2024-12-09 22:54:39.618135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:23.792 [2024-12-09 22:54:39.618246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:13:23.792 pt0
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.792 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.051 91fe2229-c01a-4ac6-a656-562470cc472b
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.051 28e08181-d4e5-445e-a176-cce7a14faae4
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.051 b02cbe8e-5f20-4472-8f65-7d930bf4a368
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.051 [2024-12-09 22:54:39.842137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28e08181-d4e5-445e-a176-cce7a14faae4 is claimed
00:13:24.051 [2024-12-09 22:54:39.842354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b02cbe8e-5f20-4472-8f65-7d930bf4a368 is claimed
00:13:24.051 [2024-12-09 22:54:39.842583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:24.051 [2024-12-09 22:54:39.842641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:13:24.051 [2024-12-09 22:54:39.843025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:13:24.051 [2024-12-09 22:54:39.843308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:24.051 [2024-12-09 22:54:39.843356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:13:24.051 [2024-12-09 22:54:39.843630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.051 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.310 [2024-12-09 22:54:39.958253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.310 22:54:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.310 [2024-12-09 22:54:40.006144] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:24.310 [2024-12-09 22:54:40.006180] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '28e08181-d4e5-445e-a176-cce7a14faae4' was resized: old size 131072, new size 204800
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.310 [2024-12-09 22:54:40.018012] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:24.310 [2024-12-09 22:54:40.018043] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b02cbe8e-5f20-4472-8f65-7d930bf4a368' was resized: old size 131072, new size 204800
00:13:24.310 [2024-12-09 22:54:40.018078] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.310 [2024-12-09 22:54:40.133971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:24.310 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.570 [2024-12-09 22:54:40.181632] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:13:24.570 [2024-12-09 22:54:40.181726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:13:24.570 [2024-12-09 22:54:40.181745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:24.570 [2024-12-09 22:54:40.181759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:13:24.570 [2024-12-09 22:54:40.181916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:24.570 [2024-12-09 22:54:40.181959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:24.570 [2024-12-09 22:54:40.181974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.570 [2024-12-09 22:54:40.193471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:13:24.570 [2024-12-09 22:54:40.193612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:24.570 [2024-12-09 22:54:40.193646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:13:24.570 [2024-12-09 22:54:40.193671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:24.570 [2024-12-09 22:54:40.196551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:24.570 [2024-12-09 22:54:40.196592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:13:24.570 pt0
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.570 [2024-12-09 22:54:40.198590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 28e08181-d4e5-445e-a176-cce7a14faae4
00:13:24.570 [2024-12-09 22:54:40.198671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28e08181-d4e5-445e-a176-cce7a14faae4 is claimed
00:13:24.570 [2024-12-09 22:54:40.198782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b02cbe8e-5f20-4472-8f65-7d930bf4a368
00:13:24.570 [2024-12-09 22:54:40.198802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b02cbe8e-5f20-4472-8f65-7d930bf4a368 is claimed
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:13:24.570 [2024-12-09 22:54:40.198951] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b02cbe8e-5f20-4472-8f65-7d930bf4a368 (2) smaller than existing raid bdev Raid (3)
00:13:24.570 [2024-12-09 22:54:40.198977] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 28e08181-d4e5-445e-a176-cce7a14faae4: File exists
00:13:24.570 [2024-12-09 22:54:40.199017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:13:24.570 [2024-12-09 22:54:40.199031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.570 [2024-12-09 22:54:40.199310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.570 [2024-12-09 22:54:40.199490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:13:24.570 [2024-12-09 22:54:40.199500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:13:24.570 [2024-12-09 22:54:40.199663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:24.570 [2024-12-09 22:54:40.221755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60520
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60520 ']'
00:13:24.570 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60520
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60520
killing process with pid 60520
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60520'
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60520
00:13:24.571 [2024-12-09 22:54:40.303801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:24.571 [2024-12-09 22:54:40.303918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:24.571 22:54:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60520
00:13:24.571 [2024-12-09 22:54:40.303982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:24.571 [2024-12-09 22:54:40.303992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:13:26.548 [2024-12-09 22:54:41.930465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:27.484 22:54:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:13:27.484
00:13:27.484 real 0m5.203s
00:13:27.484 user 0m5.270s
00:13:27.484 sys 0m0.772s
00:13:27.484 22:54:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:27.484 22:54:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:27.484 ************************************
00:13:27.484 END TEST raid0_resize_superblock_test
00:13:27.484 ************************************
00:13:27.484 22:54:43 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:13:27.484 22:54:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:27.484 22:54:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:27.484 22:54:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:27.484 ************************************
00:13:27.484 START TEST raid1_resize_superblock_test
00:13:27.484 ************************************
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60624
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:27.484 Process raid pid: 60624
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60624'
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60624
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60624 ']'
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.484 22:54:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.743 [2024-12-09 22:54:43.368691] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:27.743 [2024-12-09 22:54:43.368933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.743 [2024-12-09 22:54:43.549004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.002 [2024-12-09 22:54:43.699560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.261 [2024-12-09 22:54:43.941983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.261 [2024-12-09 22:54:43.942186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.519 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.519 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:28.519 22:54:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:13:28.519 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.519 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.118 malloc0 00:13:29.118 
22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.118 [2024-12-09 22:54:44.871360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:29.118 [2024-12-09 22:54:44.871433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.118 [2024-12-09 22:54:44.871475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.118 [2024-12-09 22:54:44.871490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.118 [2024-12-09 22:54:44.874186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.118 [2024-12-09 22:54:44.874226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:29.118 pt0 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.118 22:54:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 41be01be-57fa-4ce0-b6a5-d2a3fe422ebf 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:13:29.379 22:54:45 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 f7dc028a-5989-4e75-824c-c71f4e43cec1 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 70925d82-0cce-474f-bfa2-14f5c270ee94 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 [2024-12-09 22:54:45.081553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f7dc028a-5989-4e75-824c-c71f4e43cec1 is claimed 00:13:29.379 [2024-12-09 22:54:45.081736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 70925d82-0cce-474f-bfa2-14f5c270ee94 is claimed 00:13:29.379 [2024-12-09 22:54:45.081888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:29.379 [2024-12-09 22:54:45.081908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:29.379 [2024-12-09 22:54:45.082233] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:29.379 [2024-12-09 22:54:45.082442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:29.379 [2024-12-09 22:54:45.082453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:29.379 [2024-12-09 22:54:45.082646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.379 [2024-12-09 22:54:45.197676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.379 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.638 [2024-12-09 22:54:45.265631] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:29.638 [2024-12-09 22:54:45.265772] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f7dc028a-5989-4e75-824c-c71f4e43cec1' was resized: old size 131072, new size 204800 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.638 22:54:45 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.638 [2024-12-09 22:54:45.277491] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:29.638 [2024-12-09 22:54:45.277526] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '70925d82-0cce-474f-bfa2-14f5c270ee94' was resized: old size 131072, new size 204800 00:13:29.638 [2024-12-09 22:54:45.277562] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.638 22:54:45 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.638 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:29.639 [2024-12-09 22:54:45.393332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.639 [2024-12-09 22:54:45.441018] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:13:29.639 [2024-12-09 22:54:45.441114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:29.639 [2024-12-09 22:54:45.441147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:29.639 [2024-12-09 22:54:45.441353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.639 [2024-12-09 22:54:45.441636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.639 [2024-12-09 22:54:45.441722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.639 [2024-12-09 22:54:45.441739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.639 [2024-12-09 22:54:45.448840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:29.639 [2024-12-09 22:54:45.448899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.639 [2024-12-09 22:54:45.448924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:13:29.639 [2024-12-09 22:54:45.448940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.639 [2024-12-09 22:54:45.451784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.639 [2024-12-09 22:54:45.451863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:13:29.639 pt0 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.639 [2024-12-09 22:54:45.453960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f7dc028a-5989-4e75-824c-c71f4e43cec1 00:13:29.639 [2024-12-09 22:54:45.454057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f7dc028a-5989-4e75-824c-c71f4e43cec1 is claimed 00:13:29.639 [2024-12-09 22:54:45.454187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 70925d82-0cce-474f-bfa2-14f5c270ee94 00:13:29.639 [2024-12-09 22:54:45.454209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 70925d82-0cce-474f-bfa2-14f5c270ee94 is claimed 00:13:29.639 [2024-12-09 22:54:45.454338] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 70925d82-0cce-474f-bfa2-14f5c270ee94 (2) smaller than existing raid bdev Raid (3) 00:13:29.639 [2024-12-09 22:54:45.454363] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f7dc028a-5989-4e75-824c-c71f4e43cec1: File exists 00:13:29.639 [2024-12-09 22:54:45.454409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:29.639 [2024-12-09 22:54:45.454424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:29.639 [2024-12-09 22:54:45.454725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:29.639 [2024-12-09 22:54:45.454915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:29.639 [2024-12-09 
22:54:45.454925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:13:29.639 [2024-12-09 22:54:45.455116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:13:29.639 [2024-12-09 22:54:45.469155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.639 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.898 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:29.898 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:29.898 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:13:29.898 22:54:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60624 00:13:29.898 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60624 ']' 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60624 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60624 00:13:29.899 killing process with pid 60624 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60624' 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60624 00:13:29.899 [2024-12-09 22:54:45.546854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.899 [2024-12-09 22:54:45.546980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.899 22:54:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60624 00:13:29.899 [2024-12-09 22:54:45.547054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.899 [2024-12-09 22:54:45.547065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:31.804 [2024-12-09 22:54:47.357689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.200 ************************************ 00:13:33.200 END TEST raid1_resize_superblock_test 00:13:33.200 ************************************ 00:13:33.200 22:54:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:13:33.200 00:13:33.200 real 0m5.538s 00:13:33.200 user 0m5.573s 00:13:33.200 sys 0m0.780s 00:13:33.200 22:54:48 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.200 22:54:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.200 22:54:48 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:13:33.200 22:54:48 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:13:33.200 22:54:48 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:13:33.200 22:54:48 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:13:33.200 22:54:48 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:13:33.200 22:54:48 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:33.200 22:54:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:33.200 22:54:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.200 22:54:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.200 ************************************ 00:13:33.200 START TEST raid_function_test_raid0 00:13:33.200 ************************************ 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60738 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60738' 00:13:33.200 Process raid pid: 60738 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # 
waitforlisten 60738 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60738 ']' 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.200 22:54:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:33.200 [2024-12-09 22:54:49.000512] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:33.200 [2024-12-09 22:54:49.000779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.458 [2024-12-09 22:54:49.186537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.722 [2024-12-09 22:54:49.336246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.981 [2024-12-09 22:54:49.603396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.981 [2024-12-09 22:54:49.603558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:13:34.239 22:54:49 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:34.239 Base_1 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:34.239 Base_2 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.239 22:54:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:34.239 [2024-12-09 22:54:50.000671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:34.239 [2024-12-09 22:54:50.003047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:34.239 [2024-12-09 22:54:50.003195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.239 [2024-12-09 22:54:50.003214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:34.239 [2024-12-09 22:54:50.003530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:34.239 [2024-12-09 22:54:50.003717] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007780 00:13:34.239 [2024-12-09 22:54:50.003728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:13:34.239 [2024-12-09 22:54:50.003932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.239 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:34.240 22:54:50 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.240 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:13:34.498 [2024-12-09 22:54:50.320748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:34.498 /dev/nbd0 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.756 1+0 records in 00:13:34.756 1+0 records out 00:13:34.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341263 s, 12.0 MB/s 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.756 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:35.014 { 00:13:35.014 "nbd_device": "/dev/nbd0", 00:13:35.014 "bdev_name": "raid" 00:13:35.014 } 00:13:35.014 ]' 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:35.014 { 00:13:35.014 "nbd_device": "/dev/nbd0", 00:13:35.014 "bdev_name": "raid" 00:13:35.014 } 00:13:35.014 ]' 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@25 -- # local unmap_off 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:35.014 4096+0 records in 00:13:35.014 4096+0 records out 00:13:35.014 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0367239 s, 57.1 MB/s 00:13:35.014 22:54:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:35.272 4096+0 records in 00:13:35.272 4096+0 records out 00:13:35.272 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237756 s, 8.8 MB/s 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:35.272 128+0 records in 00:13:35.272 128+0 records out 00:13:35.272 65536 bytes (66 kB, 64 KiB) copied, 0.00137775 s, 47.6 MB/s 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 
-- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:35.272 2035+0 records in 00:13:35.272 2035+0 records out 00:13:35.272 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0160933 s, 64.7 MB/s 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:35.272 456+0 records in 00:13:35.272 456+0 records out 00:13:35.272 233472 bytes (233 kB, 228 KiB) copied, 0.00393277 s, 59.4 MB/s 00:13:35.272 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:35.531 
22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.531 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:35.790 [2024-12-09 22:54:51.394951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.790 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60738 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # '[' -z 60738 ']' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60738 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60738 00:13:36.049 killing process with pid 60738 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60738' 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60738 00:13:36.049 [2024-12-09 22:54:51.719179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.049 22:54:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60738 00:13:36.049 [2024-12-09 22:54:51.719313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.049 [2024-12-09 22:54:51.719376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.049 [2024-12-09 22:54:51.719401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:13:36.319 [2024-12-09 22:54:51.964414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.695 22:54:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:13:37.695 00:13:37.695 real 0m4.371s 00:13:37.695 user 0m4.968s 00:13:37.695 sys 0m1.192s 00:13:37.695 22:54:53 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.695 22:54:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:37.695 ************************************ 00:13:37.695 END TEST raid_function_test_raid0 00:13:37.695 ************************************ 00:13:37.695 22:54:53 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:13:37.695 22:54:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:37.695 22:54:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.695 22:54:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.695 ************************************ 00:13:37.695 START TEST raid_function_test_concat 00:13:37.695 ************************************ 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60867 00:13:37.695 Process raid pid: 60867 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60867' 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60867 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60867 ']' 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.695 22:54:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:37.695 [2024-12-09 22:54:53.430658] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:37.695 [2024-12-09 22:54:53.430807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.954 [2024-12-09 22:54:53.609575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.954 [2024-12-09 22:54:53.762366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.212 [2024-12-09 22:54:54.025777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.212 [2024-12-09 22:54:54.025861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:38.780 Base_1 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:38.780 Base_2 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:38.780 [2024-12-09 22:54:54.450326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:38.780 [2024-12-09 22:54:54.452449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:38.780 [2024-12-09 22:54:54.452570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:38.780 [2024-12-09 22:54:54.452583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:38.780 [2024-12-09 22:54:54.452897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:38.780 [2024-12-09 22:54:54.453089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:38.780 [2024-12-09 22:54:54.453105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 
00:13:38.780 [2024-12-09 22:54:54.453293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.780 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.780 22:54:54 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:13:39.039 [2024-12-09 22:54:54.682017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:39.039 /dev/nbd0 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.039 1+0 records in 00:13:39.039 1+0 records out 00:13:39.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383287 s, 10.7 MB/s 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 
00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.039 22:54:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:39.298 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:39.298 { 00:13:39.298 "nbd_device": "/dev/nbd0", 00:13:39.298 "bdev_name": "raid" 00:13:39.298 } 00:13:39.298 ]' 00:13:39.298 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.298 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:39.298 { 00:13:39.298 "nbd_device": "/dev/nbd0", 00:13:39.298 "bdev_name": "raid" 00:13:39.298 } 00:13:39.298 ]' 00:13:39.298 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:39.298 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:39.299 22:54:55 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:39.299 4096+0 records in 00:13:39.299 4096+0 records out 00:13:39.299 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341135 s, 61.5 MB/s 00:13:39.299 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:39.557 4096+0 records in 00:13:39.557 4096+0 records out 00:13:39.557 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.24611 s, 8.5 MB/s 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:39.557 128+0 records in 00:13:39.557 128+0 records out 00:13:39.557 65536 bytes (66 kB, 64 KiB) copied, 0.00120166 s, 54.5 MB/s 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:39.557 2035+0 records in 00:13:39.557 2035+0 records out 00:13:39.557 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130068 s, 80.1 MB/s 00:13:39.557 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:39.815 456+0 records in 00:13:39.815 456+0 records out 00:13:39.815 233472 bytes (233 kB, 228 KiB) copied, 0.00408506 s, 57.2 MB/s 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
00:13:39.815 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.816 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.074 [2024-12-09 22:54:55.702417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.074 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:40.334 22:54:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60867 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60867 ']' 
00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60867 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60867 00:13:40.334 killing process with pid 60867 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60867' 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60867 00:13:40.334 22:54:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60867 00:13:40.334 [2024-12-09 22:54:56.048151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.334 [2024-12-09 22:54:56.048288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.334 [2024-12-09 22:54:56.048368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.334 [2024-12-09 22:54:56.048388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:13:40.593 [2024-12-09 22:54:56.300693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.042 ************************************ 00:13:42.042 END TEST raid_function_test_concat 00:13:42.042 ************************************ 00:13:42.042 22:54:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:13:42.042 00:13:42.042 real 0m4.280s 
00:13:42.042 user 0m4.821s 00:13:42.042 sys 0m1.136s 00:13:42.042 22:54:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.042 22:54:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:42.042 22:54:57 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:13:42.042 22:54:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.042 22:54:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.042 22:54:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.042 ************************************ 00:13:42.042 START TEST raid0_resize_test 00:13:42.042 ************************************ 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60995 00:13:42.042 Process raid pid: 60995 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60995' 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # 
waitforlisten 60995 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60995 ']' 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.042 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.043 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.043 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.043 22:54:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:42.043 22:54:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.043 [2024-12-09 22:54:57.773035] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:13:42.043 [2024-12-09 22:54:57.773159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.302 [2024-12-09 22:54:57.953513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.302 [2024-12-09 22:54:58.100580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.562 [2024-12-09 22:54:58.353518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.562 [2024-12-09 22:54:58.353589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.821 Base_1 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.821 Base_2 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.821 [2024-12-09 22:54:58.651733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:42.821 [2024-12-09 22:54:58.654094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:42.821 [2024-12-09 22:54:58.654167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.821 [2024-12-09 22:54:58.654182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:42.821 [2024-12-09 22:54:58.654501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:42.821 [2024-12-09 22:54:58.654646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.821 [2024-12-09 22:54:58.654656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:42.821 [2024-12-09 22:54:58.654830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.821 [2024-12-09 22:54:58.659698] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:42.821 [2024-12-09 22:54:58.659731] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:42.821 true 
00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:13:42.821 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.821 [2024-12-09 22:54:58.671908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.080 [2024-12-09 22:54:58.723673] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:43.080 [2024-12-09 22:54:58.723714] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:43.080 [2024-12-09 22:54:58.723761] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:43.080 true 
00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:13:43.080 [2024-12-09 22:54:58.735883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60995 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60995 ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60995 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60995 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.080 22:54:58 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60995' 00:13:43.080 killing process with pid 60995 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60995 00:13:43.080 22:54:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60995 00:13:43.080 [2024-12-09 22:54:58.818119] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.080 [2024-12-09 22:54:58.818247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.080 [2024-12-09 22:54:58.818315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.080 [2024-12-09 22:54:58.818327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:43.080 [2024-12-09 22:54:58.839692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.492 22:55:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:13:44.492 00:13:44.492 real 0m2.516s 00:13:44.492 user 0m2.567s 00:13:44.492 sys 0m0.448s 00:13:44.492 22:55:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.492 22:55:00 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 ************************************ 00:13:44.492 END TEST raid0_resize_test 00:13:44.492 ************************************ 00:13:44.492 22:55:00 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:13:44.492 22:55:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.492 22:55:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.492 22:55:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 
************************************ 00:13:44.492 START TEST raid1_resize_test 00:13:44.492 ************************************ 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61062 00:13:44.492 Process raid pid: 61062 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61062' 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61062 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61062 ']' 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.492 22:55:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.750 [2024-12-09 22:55:00.396443] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:44.750 [2024-12-09 22:55:00.396665] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.750 [2024-12-09 22:55:00.582365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.008 [2024-12-09 22:55:00.733215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.266 [2024-12-09 22:55:01.006570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.266 [2024-12-09 22:55:01.006634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.525 Base_1 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:13:45.525 22:55:01 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.525 Base_2 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.525 [2024-12-09 22:55:01.266985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:45.525 [2024-12-09 22:55:01.269431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:45.525 [2024-12-09 22:55:01.269528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:45.525 [2024-12-09 22:55:01.269544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:45.525 [2024-12-09 22:55:01.269857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:45.525 [2024-12-09 22:55:01.270036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:45.525 [2024-12-09 22:55:01.270053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:45.525 [2024-12-09 22:55:01.270251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:13:45.525 22:55:01 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.525 [2024-12-09 22:55:01.278939] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:45.525 [2024-12-09 22:55:01.278979] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:45.525 true 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.525 [2024-12-09 22:55:01.295107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.525 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:45.526 [2024-12-09 22:55:01.342884] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:45.526 [2024-12-09 22:55:01.342924] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:45.526 [2024-12-09 22:55:01.342957] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:13:45.526 true 00:13:45.526 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.526 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:45.526 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:13:45.526 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.526 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.526 [2024-12-09 22:55:01.358992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.526 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61062 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61062 ']' 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 61062 00:13:45.784 
22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61062 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.784 killing process with pid 61062 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61062' 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 61062 00:13:45.784 [2024-12-09 22:55:01.443623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.784 [2024-12-09 22:55:01.443753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.784 22:55:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 61062 00:13:45.784 [2024-12-09 22:55:01.444399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.784 [2024-12-09 22:55:01.444441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:45.784 [2024-12-09 22:55:01.466152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.314 22:55:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:13:47.314 00:13:47.314 real 0m2.554s 00:13:47.314 user 0m2.608s 00:13:47.314 sys 0m0.493s 00:13:47.314 22:55:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.314 22:55:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.314 ************************************ 00:13:47.314 END TEST raid1_resize_test 
00:13:47.314 ************************************ 00:13:47.314 22:55:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:47.314 22:55:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:47.314 22:55:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:47.314 22:55:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:47.314 22:55:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.314 22:55:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.314 ************************************ 00:13:47.314 START TEST raid_state_function_test 00:13:47.314 ************************************ 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:47.314 22:55:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61119 00:13:47.314 Process raid pid: 61119 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61119' 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61119 00:13:47.314 22:55:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61119 ']' 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.314 22:55:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.314 [2024-12-09 22:55:02.987633] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:47.314 [2024-12-09 22:55:02.987783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.314 [2024-12-09 22:55:03.168011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.573 [2024-12-09 22:55:03.317227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.831 [2024-12-09 22:55:03.588256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.831 [2024-12-09 22:55:03.588324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.089 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.090 [2024-12-09 22:55:03.866059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.090 [2024-12-09 22:55:03.866127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.090 [2024-12-09 22:55:03.866139] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.090 [2024-12-09 22:55:03.866150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.090 
22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.090 "name": "Existed_Raid", 00:13:48.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.090 "strip_size_kb": 64, 00:13:48.090 "state": "configuring", 00:13:48.090 "raid_level": "raid0", 00:13:48.090 "superblock": false, 00:13:48.090 "num_base_bdevs": 2, 00:13:48.090 "num_base_bdevs_discovered": 0, 00:13:48.090 "num_base_bdevs_operational": 2, 00:13:48.090 "base_bdevs_list": [ 00:13:48.090 { 00:13:48.090 "name": "BaseBdev1", 00:13:48.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.090 "is_configured": false, 00:13:48.090 "data_offset": 0, 00:13:48.090 "data_size": 0 00:13:48.090 }, 00:13:48.090 { 00:13:48.090 "name": "BaseBdev2", 00:13:48.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.090 "is_configured": false, 00:13:48.090 "data_offset": 0, 00:13:48.090 "data_size": 0 00:13:48.090 } 00:13:48.090 ] 00:13:48.090 }' 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.090 22:55:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.656 22:55:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.656 [2024-12-09 22:55:04.333353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.656 [2024-12-09 22:55:04.333403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.656 [2024-12-09 22:55:04.345309] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.656 [2024-12-09 22:55:04.345361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.656 [2024-12-09 22:55:04.345373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.656 [2024-12-09 22:55:04.345387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.656 [2024-12-09 22:55:04.402306] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.656 BaseBdev1 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.656 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.656 [ 00:13:48.656 { 00:13:48.656 "name": "BaseBdev1", 00:13:48.656 "aliases": [ 00:13:48.656 "10205e56-50ad-4f23-88ab-88bc2794c3d8" 00:13:48.656 ], 00:13:48.656 "product_name": "Malloc disk", 00:13:48.656 "block_size": 512, 00:13:48.656 "num_blocks": 65536, 00:13:48.656 "uuid": 
"10205e56-50ad-4f23-88ab-88bc2794c3d8", 00:13:48.656 "assigned_rate_limits": { 00:13:48.656 "rw_ios_per_sec": 0, 00:13:48.656 "rw_mbytes_per_sec": 0, 00:13:48.656 "r_mbytes_per_sec": 0, 00:13:48.656 "w_mbytes_per_sec": 0 00:13:48.656 }, 00:13:48.656 "claimed": true, 00:13:48.657 "claim_type": "exclusive_write", 00:13:48.657 "zoned": false, 00:13:48.657 "supported_io_types": { 00:13:48.657 "read": true, 00:13:48.657 "write": true, 00:13:48.657 "unmap": true, 00:13:48.657 "flush": true, 00:13:48.657 "reset": true, 00:13:48.657 "nvme_admin": false, 00:13:48.657 "nvme_io": false, 00:13:48.657 "nvme_io_md": false, 00:13:48.657 "write_zeroes": true, 00:13:48.657 "zcopy": true, 00:13:48.657 "get_zone_info": false, 00:13:48.657 "zone_management": false, 00:13:48.657 "zone_append": false, 00:13:48.657 "compare": false, 00:13:48.657 "compare_and_write": false, 00:13:48.657 "abort": true, 00:13:48.657 "seek_hole": false, 00:13:48.657 "seek_data": false, 00:13:48.657 "copy": true, 00:13:48.657 "nvme_iov_md": false 00:13:48.657 }, 00:13:48.657 "memory_domains": [ 00:13:48.657 { 00:13:48.657 "dma_device_id": "system", 00:13:48.657 "dma_device_type": 1 00:13:48.657 }, 00:13:48.657 { 00:13:48.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.657 "dma_device_type": 2 00:13:48.657 } 00:13:48.657 ], 00:13:48.657 "driver_specific": {} 00:13:48.657 } 00:13:48.657 ] 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.657 22:55:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.657 "name": "Existed_Raid", 00:13:48.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.657 "strip_size_kb": 64, 00:13:48.657 "state": "configuring", 00:13:48.657 "raid_level": "raid0", 00:13:48.657 "superblock": false, 00:13:48.657 "num_base_bdevs": 2, 00:13:48.657 "num_base_bdevs_discovered": 1, 00:13:48.657 "num_base_bdevs_operational": 2, 00:13:48.657 "base_bdevs_list": [ 00:13:48.657 { 00:13:48.657 "name": "BaseBdev1", 00:13:48.657 "uuid": "10205e56-50ad-4f23-88ab-88bc2794c3d8", 00:13:48.657 "is_configured": true, 00:13:48.657 "data_offset": 0, 
00:13:48.657 "data_size": 65536 00:13:48.657 }, 00:13:48.657 { 00:13:48.657 "name": "BaseBdev2", 00:13:48.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.657 "is_configured": false, 00:13:48.657 "data_offset": 0, 00:13:48.657 "data_size": 0 00:13:48.657 } 00:13:48.657 ] 00:13:48.657 }' 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.657 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.224 [2024-12-09 22:55:04.893596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.224 [2024-12-09 22:55:04.893670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.224 [2024-12-09 22:55:04.905609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.224 [2024-12-09 22:55:04.907908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.224 [2024-12-09 22:55:04.907948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.224 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.225 "name": "Existed_Raid", 00:13:49.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.225 "strip_size_kb": 64, 00:13:49.225 "state": "configuring", 00:13:49.225 "raid_level": "raid0", 00:13:49.225 "superblock": false, 00:13:49.225 "num_base_bdevs": 2, 00:13:49.225 "num_base_bdevs_discovered": 1, 00:13:49.225 "num_base_bdevs_operational": 2, 00:13:49.225 "base_bdevs_list": [ 00:13:49.225 { 00:13:49.225 "name": "BaseBdev1", 00:13:49.225 "uuid": "10205e56-50ad-4f23-88ab-88bc2794c3d8", 00:13:49.225 "is_configured": true, 00:13:49.225 "data_offset": 0, 00:13:49.225 "data_size": 65536 00:13:49.225 }, 00:13:49.225 { 00:13:49.225 "name": "BaseBdev2", 00:13:49.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.225 "is_configured": false, 00:13:49.225 "data_offset": 0, 00:13:49.225 "data_size": 0 00:13:49.225 } 00:13:49.225 ] 00:13:49.225 }' 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.225 22:55:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.483 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.483 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.483 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.742 [2024-12-09 22:55:05.375621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.742 [2024-12-09 22:55:05.375726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:49.742 [2024-12-09 22:55:05.375738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:49.742 [2024-12-09 22:55:05.376079] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:49.742 [2024-12-09 22:55:05.376323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:49.742 [2024-12-09 22:55:05.376342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:49.742 [2024-12-09 22:55:05.376757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.742 BaseBdev2 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.742 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.742 22:55:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.742 [ 00:13:49.742 { 00:13:49.742 "name": "BaseBdev2", 00:13:49.742 "aliases": [ 00:13:49.742 "389095cb-de9d-4f4d-b9fa-947e6499c5d8" 00:13:49.742 ], 00:13:49.742 "product_name": "Malloc disk", 00:13:49.742 "block_size": 512, 00:13:49.742 "num_blocks": 65536, 00:13:49.742 "uuid": "389095cb-de9d-4f4d-b9fa-947e6499c5d8", 00:13:49.742 "assigned_rate_limits": { 00:13:49.742 "rw_ios_per_sec": 0, 00:13:49.742 "rw_mbytes_per_sec": 0, 00:13:49.742 "r_mbytes_per_sec": 0, 00:13:49.742 "w_mbytes_per_sec": 0 00:13:49.742 }, 00:13:49.742 "claimed": true, 00:13:49.742 "claim_type": "exclusive_write", 00:13:49.742 "zoned": false, 00:13:49.742 "supported_io_types": { 00:13:49.742 "read": true, 00:13:49.742 "write": true, 00:13:49.743 "unmap": true, 00:13:49.743 "flush": true, 00:13:49.743 "reset": true, 00:13:49.743 "nvme_admin": false, 00:13:49.743 "nvme_io": false, 00:13:49.743 "nvme_io_md": false, 00:13:49.743 "write_zeroes": true, 00:13:49.743 "zcopy": true, 00:13:49.743 "get_zone_info": false, 00:13:49.743 "zone_management": false, 00:13:49.743 "zone_append": false, 00:13:49.743 "compare": false, 00:13:49.743 "compare_and_write": false, 00:13:49.743 "abort": true, 00:13:49.743 "seek_hole": false, 00:13:49.743 "seek_data": false, 00:13:49.743 "copy": true, 00:13:49.743 "nvme_iov_md": false 00:13:49.743 }, 00:13:49.743 "memory_domains": [ 00:13:49.743 { 00:13:49.743 "dma_device_id": "system", 00:13:49.743 "dma_device_type": 1 00:13:49.743 }, 00:13:49.743 { 00:13:49.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.743 "dma_device_type": 2 00:13:49.743 } 00:13:49.743 ], 00:13:49.743 "driver_specific": {} 00:13:49.743 } 00:13:49.743 ] 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:49.743 22:55:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:49.743 "name": "Existed_Raid", 00:13:49.743 "uuid": "3ebe2bda-4271-4fea-af51-bc3c2cf9f3f2", 00:13:49.743 "strip_size_kb": 64, 00:13:49.743 "state": "online", 00:13:49.743 "raid_level": "raid0", 00:13:49.743 "superblock": false, 00:13:49.743 "num_base_bdevs": 2, 00:13:49.743 "num_base_bdevs_discovered": 2, 00:13:49.743 "num_base_bdevs_operational": 2, 00:13:49.743 "base_bdevs_list": [ 00:13:49.743 { 00:13:49.743 "name": "BaseBdev1", 00:13:49.743 "uuid": "10205e56-50ad-4f23-88ab-88bc2794c3d8", 00:13:49.743 "is_configured": true, 00:13:49.743 "data_offset": 0, 00:13:49.743 "data_size": 65536 00:13:49.743 }, 00:13:49.743 { 00:13:49.743 "name": "BaseBdev2", 00:13:49.743 "uuid": "389095cb-de9d-4f4d-b9fa-947e6499c5d8", 00:13:49.743 "is_configured": true, 00:13:49.743 "data_offset": 0, 00:13:49.743 "data_size": 65536 00:13:49.743 } 00:13:49.743 ] 00:13:49.743 }' 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.743 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.001 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:50.001 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:50.001 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:50.001 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:50.001 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.260 [2024-12-09 22:55:05.867154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:50.260 "name": "Existed_Raid", 00:13:50.260 "aliases": [ 00:13:50.260 "3ebe2bda-4271-4fea-af51-bc3c2cf9f3f2" 00:13:50.260 ], 00:13:50.260 "product_name": "Raid Volume", 00:13:50.260 "block_size": 512, 00:13:50.260 "num_blocks": 131072, 00:13:50.260 "uuid": "3ebe2bda-4271-4fea-af51-bc3c2cf9f3f2", 00:13:50.260 "assigned_rate_limits": { 00:13:50.260 "rw_ios_per_sec": 0, 00:13:50.260 "rw_mbytes_per_sec": 0, 00:13:50.260 "r_mbytes_per_sec": 0, 00:13:50.260 "w_mbytes_per_sec": 0 00:13:50.260 }, 00:13:50.260 "claimed": false, 00:13:50.260 "zoned": false, 00:13:50.260 "supported_io_types": { 00:13:50.260 "read": true, 00:13:50.260 "write": true, 00:13:50.260 "unmap": true, 00:13:50.260 "flush": true, 00:13:50.260 "reset": true, 00:13:50.260 "nvme_admin": false, 00:13:50.260 "nvme_io": false, 00:13:50.260 "nvme_io_md": false, 00:13:50.260 "write_zeroes": true, 00:13:50.260 "zcopy": false, 00:13:50.260 "get_zone_info": false, 00:13:50.260 "zone_management": false, 00:13:50.260 "zone_append": false, 00:13:50.260 "compare": false, 00:13:50.260 "compare_and_write": false, 00:13:50.260 "abort": false, 00:13:50.260 "seek_hole": false, 00:13:50.260 "seek_data": false, 00:13:50.260 "copy": false, 00:13:50.260 "nvme_iov_md": false 00:13:50.260 }, 00:13:50.260 "memory_domains": [ 00:13:50.260 { 00:13:50.260 "dma_device_id": "system", 00:13:50.260 "dma_device_type": 1 00:13:50.260 }, 00:13:50.260 { 00:13:50.260 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:50.260 "dma_device_type": 2 00:13:50.260 }, 00:13:50.260 { 00:13:50.260 "dma_device_id": "system", 00:13:50.260 "dma_device_type": 1 00:13:50.260 }, 00:13:50.260 { 00:13:50.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.260 "dma_device_type": 2 00:13:50.260 } 00:13:50.260 ], 00:13:50.260 "driver_specific": { 00:13:50.260 "raid": { 00:13:50.260 "uuid": "3ebe2bda-4271-4fea-af51-bc3c2cf9f3f2", 00:13:50.260 "strip_size_kb": 64, 00:13:50.260 "state": "online", 00:13:50.260 "raid_level": "raid0", 00:13:50.260 "superblock": false, 00:13:50.260 "num_base_bdevs": 2, 00:13:50.260 "num_base_bdevs_discovered": 2, 00:13:50.260 "num_base_bdevs_operational": 2, 00:13:50.260 "base_bdevs_list": [ 00:13:50.260 { 00:13:50.260 "name": "BaseBdev1", 00:13:50.260 "uuid": "10205e56-50ad-4f23-88ab-88bc2794c3d8", 00:13:50.260 "is_configured": true, 00:13:50.260 "data_offset": 0, 00:13:50.260 "data_size": 65536 00:13:50.260 }, 00:13:50.260 { 00:13:50.260 "name": "BaseBdev2", 00:13:50.260 "uuid": "389095cb-de9d-4f4d-b9fa-947e6499c5d8", 00:13:50.260 "is_configured": true, 00:13:50.260 "data_offset": 0, 00:13:50.260 "data_size": 65536 00:13:50.260 } 00:13:50.260 ] 00:13:50.260 } 00:13:50.260 } 00:13:50.260 }' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:50.260 BaseBdev2' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.260 22:55:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.260 22:55:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:50.260 [2024-12-09 22:55:06.066638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.260 [2024-12-09 22:55:06.066686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.260 [2024-12-09 22:55:06.066750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.520 22:55:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.520 "name": "Existed_Raid", 00:13:50.520 "uuid": "3ebe2bda-4271-4fea-af51-bc3c2cf9f3f2", 00:13:50.520 "strip_size_kb": 64, 00:13:50.520 "state": "offline", 00:13:50.520 "raid_level": "raid0", 00:13:50.520 "superblock": false, 00:13:50.520 "num_base_bdevs": 2, 00:13:50.520 "num_base_bdevs_discovered": 1, 00:13:50.520 "num_base_bdevs_operational": 1, 00:13:50.520 "base_bdevs_list": [ 00:13:50.520 { 00:13:50.520 "name": null, 00:13:50.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.520 "is_configured": false, 00:13:50.520 "data_offset": 0, 00:13:50.520 "data_size": 65536 00:13:50.520 }, 00:13:50.520 { 00:13:50.520 "name": "BaseBdev2", 00:13:50.520 "uuid": "389095cb-de9d-4f4d-b9fa-947e6499c5d8", 00:13:50.520 "is_configured": true, 00:13:50.520 "data_offset": 0, 00:13:50.520 "data_size": 65536 00:13:50.520 } 00:13:50.520 ] 00:13:50.520 }' 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.520 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.784 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.785 [2024-12-09 22:55:06.636870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.785 [2024-12-09 22:55:06.636952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.044 22:55:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61119 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61119 ']' 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61119 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61119 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.044 killing process with pid 61119 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61119' 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61119 00:13:51.044 [2024-12-09 22:55:06.847140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:13:51.044 22:55:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61119 00:13:51.044 [2024-12-09 22:55:06.866092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:52.423 00:13:52.423 real 0m5.278s 00:13:52.423 user 0m7.370s 00:13:52.423 sys 0m0.981s 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 ************************************ 00:13:52.423 END TEST raid_state_function_test 00:13:52.423 ************************************ 00:13:52.423 22:55:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:52.423 22:55:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:52.423 22:55:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.423 22:55:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 ************************************ 00:13:52.423 START TEST raid_state_function_test_sb 00:13:52.423 ************************************ 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61372 00:13:52.423 Process raid pid: 61372 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61372' 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61372 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61372 ']' 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.423 22:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.683 [2024-12-09 22:55:08.329392] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:13:52.683 [2024-12-09 22:55:08.329523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.683 [2024-12-09 22:55:08.506047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.942 [2024-12-09 22:55:08.662173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.201 [2024-12-09 22:55:08.925195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.201 [2024-12-09 22:55:08.925254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.461 [2024-12-09 22:55:09.180159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.461 [2024-12-09 22:55:09.180228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.461 [2024-12-09 22:55:09.180240] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.461 [2024-12-09 22:55:09.180252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.461 
22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.461 "name": "Existed_Raid", 00:13:53.461 "uuid": "2cc54d8e-3962-4b9b-8698-69ea65e7eb24", 00:13:53.461 "strip_size_kb": 
64, 00:13:53.461 "state": "configuring", 00:13:53.461 "raid_level": "raid0", 00:13:53.461 "superblock": true, 00:13:53.461 "num_base_bdevs": 2, 00:13:53.461 "num_base_bdevs_discovered": 0, 00:13:53.461 "num_base_bdevs_operational": 2, 00:13:53.461 "base_bdevs_list": [ 00:13:53.461 { 00:13:53.461 "name": "BaseBdev1", 00:13:53.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.461 "is_configured": false, 00:13:53.461 "data_offset": 0, 00:13:53.461 "data_size": 0 00:13:53.461 }, 00:13:53.461 { 00:13:53.461 "name": "BaseBdev2", 00:13:53.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.461 "is_configured": false, 00:13:53.461 "data_offset": 0, 00:13:53.461 "data_size": 0 00:13:53.461 } 00:13:53.461 ] 00:13:53.461 }' 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.461 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 [2024-12-09 22:55:09.611413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.031 [2024-12-09 22:55:09.611491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.031 22:55:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 [2024-12-09 22:55:09.623425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.031 [2024-12-09 22:55:09.623491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.031 [2024-12-09 22:55:09.623502] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.031 [2024-12-09 22:55:09.623517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 [2024-12-09 22:55:09.683051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.031 BaseBdev1 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.031 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 [ 00:13:54.031 { 00:13:54.031 "name": "BaseBdev1", 00:13:54.031 "aliases": [ 00:13:54.031 "4d033855-e4c2-4730-917c-4a2cfe971473" 00:13:54.031 ], 00:13:54.031 "product_name": "Malloc disk", 00:13:54.031 "block_size": 512, 00:13:54.031 "num_blocks": 65536, 00:13:54.031 "uuid": "4d033855-e4c2-4730-917c-4a2cfe971473", 00:13:54.031 "assigned_rate_limits": { 00:13:54.031 "rw_ios_per_sec": 0, 00:13:54.031 "rw_mbytes_per_sec": 0, 00:13:54.031 "r_mbytes_per_sec": 0, 00:13:54.031 "w_mbytes_per_sec": 0 00:13:54.031 }, 00:13:54.031 "claimed": true, 00:13:54.031 "claim_type": "exclusive_write", 00:13:54.031 "zoned": false, 00:13:54.031 "supported_io_types": { 00:13:54.031 "read": true, 00:13:54.031 "write": true, 00:13:54.031 "unmap": true, 00:13:54.032 "flush": true, 00:13:54.032 "reset": true, 00:13:54.032 "nvme_admin": false, 00:13:54.032 "nvme_io": false, 00:13:54.032 "nvme_io_md": false, 00:13:54.032 "write_zeroes": true, 00:13:54.032 "zcopy": true, 00:13:54.032 "get_zone_info": false, 00:13:54.032 "zone_management": false, 00:13:54.032 "zone_append": false, 00:13:54.032 "compare": false, 00:13:54.032 "compare_and_write": false, 00:13:54.032 
"abort": true, 00:13:54.032 "seek_hole": false, 00:13:54.032 "seek_data": false, 00:13:54.032 "copy": true, 00:13:54.032 "nvme_iov_md": false 00:13:54.032 }, 00:13:54.032 "memory_domains": [ 00:13:54.032 { 00:13:54.032 "dma_device_id": "system", 00:13:54.032 "dma_device_type": 1 00:13:54.032 }, 00:13:54.032 { 00:13:54.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.032 "dma_device_type": 2 00:13:54.032 } 00:13:54.032 ], 00:13:54.032 "driver_specific": {} 00:13:54.032 } 00:13:54.032 ] 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.032 "name": "Existed_Raid", 00:13:54.032 "uuid": "6a527f28-a112-4fe3-b8ae-9a04561cf1e1", 00:13:54.032 "strip_size_kb": 64, 00:13:54.032 "state": "configuring", 00:13:54.032 "raid_level": "raid0", 00:13:54.032 "superblock": true, 00:13:54.032 "num_base_bdevs": 2, 00:13:54.032 "num_base_bdevs_discovered": 1, 00:13:54.032 "num_base_bdevs_operational": 2, 00:13:54.032 "base_bdevs_list": [ 00:13:54.032 { 00:13:54.032 "name": "BaseBdev1", 00:13:54.032 "uuid": "4d033855-e4c2-4730-917c-4a2cfe971473", 00:13:54.032 "is_configured": true, 00:13:54.032 "data_offset": 2048, 00:13:54.032 "data_size": 63488 00:13:54.032 }, 00:13:54.032 { 00:13:54.032 "name": "BaseBdev2", 00:13:54.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.032 "is_configured": false, 00:13:54.032 "data_offset": 0, 00:13:54.032 "data_size": 0 00:13:54.032 } 00:13:54.032 ] 00:13:54.032 }' 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.032 22:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.600 [2024-12-09 22:55:10.166339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.600 [2024-12-09 22:55:10.166409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.600 [2024-12-09 22:55:10.174365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.600 [2024-12-09 22:55:10.176603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.600 [2024-12-09 22:55:10.176645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.600 "name": "Existed_Raid", 00:13:54.600 "uuid": "c82fb3a5-661b-4030-8f20-58c56e85fc19", 00:13:54.600 "strip_size_kb": 64, 00:13:54.600 "state": "configuring", 00:13:54.600 "raid_level": "raid0", 00:13:54.600 "superblock": true, 00:13:54.600 "num_base_bdevs": 2, 00:13:54.600 "num_base_bdevs_discovered": 1, 00:13:54.600 "num_base_bdevs_operational": 2, 00:13:54.600 "base_bdevs_list": [ 00:13:54.600 { 00:13:54.600 "name": "BaseBdev1", 00:13:54.600 "uuid": "4d033855-e4c2-4730-917c-4a2cfe971473", 00:13:54.600 "is_configured": true, 00:13:54.600 "data_offset": 2048, 
00:13:54.600 "data_size": 63488 00:13:54.600 }, 00:13:54.600 { 00:13:54.600 "name": "BaseBdev2", 00:13:54.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.600 "is_configured": false, 00:13:54.600 "data_offset": 0, 00:13:54.600 "data_size": 0 00:13:54.600 } 00:13:54.600 ] 00:13:54.600 }' 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.600 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 [2024-12-09 22:55:10.696287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.860 [2024-12-09 22:55:10.696660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:54.860 [2024-12-09 22:55:10.696681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:54.860 [2024-12-09 22:55:10.697002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:54.860 [2024-12-09 22:55:10.697222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:54.860 [2024-12-09 22:55:10.697245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:54.860 BaseBdev2 00:13:54.860 [2024-12-09 22:55:10.697424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.860 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.170 [ 00:13:55.170 { 00:13:55.170 "name": "BaseBdev2", 00:13:55.170 "aliases": [ 00:13:55.170 "3fdb83aa-2b4c-485e-a878-ff21b0e33e8e" 00:13:55.170 ], 00:13:55.170 "product_name": "Malloc disk", 00:13:55.170 "block_size": 512, 00:13:55.170 "num_blocks": 65536, 00:13:55.170 "uuid": "3fdb83aa-2b4c-485e-a878-ff21b0e33e8e", 00:13:55.170 "assigned_rate_limits": { 00:13:55.170 "rw_ios_per_sec": 0, 00:13:55.170 "rw_mbytes_per_sec": 0, 00:13:55.170 "r_mbytes_per_sec": 0, 00:13:55.170 "w_mbytes_per_sec": 0 00:13:55.170 }, 00:13:55.170 "claimed": true, 00:13:55.170 "claim_type": 
"exclusive_write", 00:13:55.170 "zoned": false, 00:13:55.170 "supported_io_types": { 00:13:55.170 "read": true, 00:13:55.170 "write": true, 00:13:55.170 "unmap": true, 00:13:55.170 "flush": true, 00:13:55.170 "reset": true, 00:13:55.170 "nvme_admin": false, 00:13:55.170 "nvme_io": false, 00:13:55.170 "nvme_io_md": false, 00:13:55.170 "write_zeroes": true, 00:13:55.170 "zcopy": true, 00:13:55.170 "get_zone_info": false, 00:13:55.170 "zone_management": false, 00:13:55.170 "zone_append": false, 00:13:55.170 "compare": false, 00:13:55.170 "compare_and_write": false, 00:13:55.170 "abort": true, 00:13:55.170 "seek_hole": false, 00:13:55.170 "seek_data": false, 00:13:55.170 "copy": true, 00:13:55.170 "nvme_iov_md": false 00:13:55.170 }, 00:13:55.170 "memory_domains": [ 00:13:55.170 { 00:13:55.170 "dma_device_id": "system", 00:13:55.170 "dma_device_type": 1 00:13:55.170 }, 00:13:55.170 { 00:13:55.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.170 "dma_device_type": 2 00:13:55.170 } 00:13:55.170 ], 00:13:55.170 "driver_specific": {} 00:13:55.170 } 00:13:55.170 ] 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.170 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.171 "name": "Existed_Raid", 00:13:55.171 "uuid": "c82fb3a5-661b-4030-8f20-58c56e85fc19", 00:13:55.171 "strip_size_kb": 64, 00:13:55.171 "state": "online", 00:13:55.171 "raid_level": "raid0", 00:13:55.171 "superblock": true, 00:13:55.171 "num_base_bdevs": 2, 00:13:55.171 "num_base_bdevs_discovered": 2, 00:13:55.171 "num_base_bdevs_operational": 2, 00:13:55.171 "base_bdevs_list": [ 00:13:55.171 { 00:13:55.171 "name": "BaseBdev1", 00:13:55.171 "uuid": "4d033855-e4c2-4730-917c-4a2cfe971473", 00:13:55.171 "is_configured": true, 00:13:55.171 "data_offset": 2048, 00:13:55.171 "data_size": 63488 
00:13:55.171 }, 00:13:55.171 { 00:13:55.171 "name": "BaseBdev2", 00:13:55.171 "uuid": "3fdb83aa-2b4c-485e-a878-ff21b0e33e8e", 00:13:55.171 "is_configured": true, 00:13:55.171 "data_offset": 2048, 00:13:55.171 "data_size": 63488 00:13:55.171 } 00:13:55.171 ] 00:13:55.171 }' 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.171 22:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.451 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.452 [2024-12-09 22:55:11.187864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.452 "name": 
"Existed_Raid", 00:13:55.452 "aliases": [ 00:13:55.452 "c82fb3a5-661b-4030-8f20-58c56e85fc19" 00:13:55.452 ], 00:13:55.452 "product_name": "Raid Volume", 00:13:55.452 "block_size": 512, 00:13:55.452 "num_blocks": 126976, 00:13:55.452 "uuid": "c82fb3a5-661b-4030-8f20-58c56e85fc19", 00:13:55.452 "assigned_rate_limits": { 00:13:55.452 "rw_ios_per_sec": 0, 00:13:55.452 "rw_mbytes_per_sec": 0, 00:13:55.452 "r_mbytes_per_sec": 0, 00:13:55.452 "w_mbytes_per_sec": 0 00:13:55.452 }, 00:13:55.452 "claimed": false, 00:13:55.452 "zoned": false, 00:13:55.452 "supported_io_types": { 00:13:55.452 "read": true, 00:13:55.452 "write": true, 00:13:55.452 "unmap": true, 00:13:55.452 "flush": true, 00:13:55.452 "reset": true, 00:13:55.452 "nvme_admin": false, 00:13:55.452 "nvme_io": false, 00:13:55.452 "nvme_io_md": false, 00:13:55.452 "write_zeroes": true, 00:13:55.452 "zcopy": false, 00:13:55.452 "get_zone_info": false, 00:13:55.452 "zone_management": false, 00:13:55.452 "zone_append": false, 00:13:55.452 "compare": false, 00:13:55.452 "compare_and_write": false, 00:13:55.452 "abort": false, 00:13:55.452 "seek_hole": false, 00:13:55.452 "seek_data": false, 00:13:55.452 "copy": false, 00:13:55.452 "nvme_iov_md": false 00:13:55.452 }, 00:13:55.452 "memory_domains": [ 00:13:55.452 { 00:13:55.452 "dma_device_id": "system", 00:13:55.452 "dma_device_type": 1 00:13:55.452 }, 00:13:55.452 { 00:13:55.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.452 "dma_device_type": 2 00:13:55.452 }, 00:13:55.452 { 00:13:55.452 "dma_device_id": "system", 00:13:55.452 "dma_device_type": 1 00:13:55.452 }, 00:13:55.452 { 00:13:55.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.452 "dma_device_type": 2 00:13:55.452 } 00:13:55.452 ], 00:13:55.452 "driver_specific": { 00:13:55.452 "raid": { 00:13:55.452 "uuid": "c82fb3a5-661b-4030-8f20-58c56e85fc19", 00:13:55.452 "strip_size_kb": 64, 00:13:55.452 "state": "online", 00:13:55.452 "raid_level": "raid0", 00:13:55.452 "superblock": true, 00:13:55.452 
"num_base_bdevs": 2, 00:13:55.452 "num_base_bdevs_discovered": 2, 00:13:55.452 "num_base_bdevs_operational": 2, 00:13:55.452 "base_bdevs_list": [ 00:13:55.452 { 00:13:55.452 "name": "BaseBdev1", 00:13:55.452 "uuid": "4d033855-e4c2-4730-917c-4a2cfe971473", 00:13:55.452 "is_configured": true, 00:13:55.452 "data_offset": 2048, 00:13:55.452 "data_size": 63488 00:13:55.452 }, 00:13:55.452 { 00:13:55.452 "name": "BaseBdev2", 00:13:55.452 "uuid": "3fdb83aa-2b4c-485e-a878-ff21b0e33e8e", 00:13:55.452 "is_configured": true, 00:13:55.452 "data_offset": 2048, 00:13:55.452 "data_size": 63488 00:13:55.452 } 00:13:55.452 ] 00:13:55.452 } 00:13:55.452 } 00:13:55.452 }' 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:55.452 BaseBdev2' 00:13:55.452 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.712 [2024-12-09 22:55:11.415185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.712 [2024-12-09 22:55:11.415230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.712 [2024-12-09 22:55:11.415299] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.712 22:55:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.712 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.971 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.971 "name": "Existed_Raid", 00:13:55.971 "uuid": "c82fb3a5-661b-4030-8f20-58c56e85fc19", 00:13:55.971 "strip_size_kb": 64, 00:13:55.971 "state": "offline", 00:13:55.971 "raid_level": "raid0", 00:13:55.971 "superblock": true, 00:13:55.971 "num_base_bdevs": 2, 00:13:55.971 "num_base_bdevs_discovered": 1, 00:13:55.971 "num_base_bdevs_operational": 1, 00:13:55.971 "base_bdevs_list": [ 00:13:55.971 { 00:13:55.971 "name": null, 00:13:55.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.971 "is_configured": false, 00:13:55.971 "data_offset": 0, 00:13:55.971 "data_size": 63488 00:13:55.971 }, 00:13:55.971 { 00:13:55.971 "name": "BaseBdev2", 00:13:55.971 "uuid": "3fdb83aa-2b4c-485e-a878-ff21b0e33e8e", 00:13:55.971 "is_configured": true, 00:13:55.971 "data_offset": 2048, 00:13:55.971 "data_size": 63488 00:13:55.971 } 00:13:55.971 ] 00:13:55.971 }' 00:13:55.971 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.971 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.231 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:56.231 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.231 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.231 22:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.231 22:55:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.231 22:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:56.231 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.231 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:56.231 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.231 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:56.231 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.231 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.231 [2024-12-09 22:55:12.033823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.231 [2024-12-09 22:55:12.033902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61372 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61372 ']' 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61372 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61372 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.491 killing process with pid 61372 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61372' 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61372 00:13:56.491 22:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61372 00:13:56.491 [2024-12-09 22:55:12.238237] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.491 [2024-12-09 22:55:12.257784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.871 22:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:13:57.871 00:13:57.871 real 0m5.379s 00:13:57.871 user 0m7.504s 00:13:57.871 sys 0m0.958s 00:13:57.871 22:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.871 22:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.871 ************************************ 00:13:57.871 END TEST raid_state_function_test_sb 00:13:57.871 ************************************ 00:13:57.871 22:55:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:57.871 22:55:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:57.871 22:55:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.871 22:55:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.871 ************************************ 00:13:57.871 START TEST raid_superblock_test 00:13:57.871 ************************************ 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61630 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61630 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:57.871 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61630 ']' 00:13:57.872 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.872 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.872 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
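A hedged sketch of the setup that raid_superblock_test performs below: three parallel arrays collect the malloc, passthru, and UUID names per base bdev, and the final `bdev_raid_create` arguments are assembled from them. The RPC calls are left as comments, since they need a running SPDK bdev_svc app; the array handling itself is plain bash.

```shell
#!/usr/bin/env bash
num_base_bdevs=2
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    # rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# raid0 takes a strip size; the raid1 branch of the test would omit -z.
strip_size_create_arg='-z 64'
echo "bdev_raid_create $strip_size_create_arg -r raid0 -b '${base_bdevs_pt[*]}' -n raid_bdev1 -s"
```

The echoed command line matches the `rpc_cmd bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s` invocation that appears later in this run; `-s` requests an on-disk superblock, which is what `raid_superblock_test` exercises.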
00:13:57.872 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.872 22:55:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.131 [2024-12-09 22:55:13.785253] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:13:58.132 [2024-12-09 22:55:13.785406] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61630 ] 00:13:58.132 [2024-12-09 22:55:13.958304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.391 [2024-12-09 22:55:14.105109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.649 [2024-12-09 22:55:14.368473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.649 [2024-12-09 22:55:14.368539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:58.908 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:58.909 22:55:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:58.909 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:58.909 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:58.909 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.909 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.168 malloc1 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.168 [2024-12-09 22:55:14.790361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.168 [2024-12-09 22:55:14.790499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.168 [2024-12-09 22:55:14.790540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.168 [2024-12-09 22:55:14.790558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.168 [2024-12-09 22:55:14.793543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.168 [2024-12-09 22:55:14.793594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.168 pt1 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:59.168 22:55:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.168 malloc2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.168 [2024-12-09 22:55:14.850978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:59.168 [2024-12-09 22:55:14.851071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.168 [2024-12-09 22:55:14.851103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.168 
[2024-12-09 22:55:14.851118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.168 [2024-12-09 22:55:14.853903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.168 [2024-12-09 22:55:14.853951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:59.168 pt2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.168 [2024-12-09 22:55:14.863028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:59.168 [2024-12-09 22:55:14.865421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:59.168 [2024-12-09 22:55:14.865663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:59.168 [2024-12-09 22:55:14.865683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:59.168 [2024-12-09 22:55:14.866054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:59.168 [2024-12-09 22:55:14.866291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:59.168 [2024-12-09 22:55:14.866324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:59.168 [2024-12-09 22:55:14.866574] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.168 "name": "raid_bdev1", 00:13:59.168 "uuid": 
"c199714c-e43f-4335-9161-f54377867cd9", 00:13:59.168 "strip_size_kb": 64, 00:13:59.168 "state": "online", 00:13:59.168 "raid_level": "raid0", 00:13:59.168 "superblock": true, 00:13:59.168 "num_base_bdevs": 2, 00:13:59.168 "num_base_bdevs_discovered": 2, 00:13:59.168 "num_base_bdevs_operational": 2, 00:13:59.168 "base_bdevs_list": [ 00:13:59.168 { 00:13:59.168 "name": "pt1", 00:13:59.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.168 "is_configured": true, 00:13:59.168 "data_offset": 2048, 00:13:59.168 "data_size": 63488 00:13:59.168 }, 00:13:59.168 { 00:13:59.168 "name": "pt2", 00:13:59.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.168 "is_configured": true, 00:13:59.168 "data_offset": 2048, 00:13:59.168 "data_size": 63488 00:13:59.168 } 00:13:59.168 ] 00:13:59.168 }' 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.168 22:55:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.739 22:55:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.739 [2024-12-09 22:55:15.354482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.739 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.739 "name": "raid_bdev1", 00:13:59.739 "aliases": [ 00:13:59.739 "c199714c-e43f-4335-9161-f54377867cd9" 00:13:59.739 ], 00:13:59.739 "product_name": "Raid Volume", 00:13:59.739 "block_size": 512, 00:13:59.739 "num_blocks": 126976, 00:13:59.739 "uuid": "c199714c-e43f-4335-9161-f54377867cd9", 00:13:59.739 "assigned_rate_limits": { 00:13:59.739 "rw_ios_per_sec": 0, 00:13:59.739 "rw_mbytes_per_sec": 0, 00:13:59.739 "r_mbytes_per_sec": 0, 00:13:59.739 "w_mbytes_per_sec": 0 00:13:59.739 }, 00:13:59.739 "claimed": false, 00:13:59.739 "zoned": false, 00:13:59.739 "supported_io_types": { 00:13:59.739 "read": true, 00:13:59.739 "write": true, 00:13:59.739 "unmap": true, 00:13:59.739 "flush": true, 00:13:59.739 "reset": true, 00:13:59.739 "nvme_admin": false, 00:13:59.739 "nvme_io": false, 00:13:59.739 "nvme_io_md": false, 00:13:59.739 "write_zeroes": true, 00:13:59.739 "zcopy": false, 00:13:59.739 "get_zone_info": false, 00:13:59.739 "zone_management": false, 00:13:59.739 "zone_append": false, 00:13:59.739 "compare": false, 00:13:59.739 "compare_and_write": false, 00:13:59.739 "abort": false, 00:13:59.739 "seek_hole": false, 00:13:59.739 "seek_data": false, 00:13:59.739 "copy": false, 00:13:59.739 "nvme_iov_md": false 00:13:59.739 }, 00:13:59.739 "memory_domains": [ 00:13:59.739 { 00:13:59.739 "dma_device_id": "system", 00:13:59.739 "dma_device_type": 1 00:13:59.739 }, 00:13:59.739 { 00:13:59.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.739 "dma_device_type": 2 00:13:59.739 }, 00:13:59.739 { 00:13:59.739 "dma_device_id": "system", 00:13:59.739 "dma_device_type": 
1 00:13:59.739 }, 00:13:59.739 { 00:13:59.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.739 "dma_device_type": 2 00:13:59.739 } 00:13:59.739 ], 00:13:59.740 "driver_specific": { 00:13:59.740 "raid": { 00:13:59.740 "uuid": "c199714c-e43f-4335-9161-f54377867cd9", 00:13:59.740 "strip_size_kb": 64, 00:13:59.740 "state": "online", 00:13:59.740 "raid_level": "raid0", 00:13:59.740 "superblock": true, 00:13:59.740 "num_base_bdevs": 2, 00:13:59.740 "num_base_bdevs_discovered": 2, 00:13:59.740 "num_base_bdevs_operational": 2, 00:13:59.740 "base_bdevs_list": [ 00:13:59.740 { 00:13:59.740 "name": "pt1", 00:13:59.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.740 "is_configured": true, 00:13:59.740 "data_offset": 2048, 00:13:59.740 "data_size": 63488 00:13:59.740 }, 00:13:59.740 { 00:13:59.740 "name": "pt2", 00:13:59.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.740 "is_configured": true, 00:13:59.740 "data_offset": 2048, 00:13:59.740 "data_size": 63488 00:13:59.740 } 00:13:59.740 ] 00:13:59.740 } 00:13:59.740 } 00:13:59.740 }' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:59.740 pt2' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:59.740 [2024-12-09 22:55:15.550063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.740 22:55:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c199714c-e43f-4335-9161-f54377867cd9 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c199714c-e43f-4335-9161-f54377867cd9 ']' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.740 [2024-12-09 22:55:15.585711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.740 [2024-12-09 22:55:15.585743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.740 [2024-12-09 22:55:15.585824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.740 [2024-12-09 22:55:15.585872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.740 [2024-12-09 22:55:15.585885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:59.740 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.999 22:55:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 [2024-12-09 22:55:15.709678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:59.999 [2024-12-09 22:55:15.711716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:59.999 [2024-12-09 22:55:15.711787] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:59.999 [2024-12-09 22:55:15.711841] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:59.999 [2024-12-09 22:55:15.711857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.999 [2024-12-09 22:55:15.711870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:59.999 request: 00:13:59.999 { 00:13:59.999 "name": "raid_bdev1", 00:13:59.999 "raid_level": "raid0", 00:13:59.999 "base_bdevs": [ 00:13:59.999 "malloc1", 00:13:59.999 "malloc2" 00:13:59.999 ], 00:13:59.999 "strip_size_kb": 64, 00:13:59.999 "superblock": false, 00:13:59.999 "method": "bdev_raid_create", 00:13:59.999 "req_id": 1 00:13:59.999 } 00:13:59.999 Got JSON-RPC error response 00:13:59.999 response: 00:13:59.999 { 00:13:59.999 "code": -17, 00:13:59.999 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:59.999 } 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.999 [2024-12-09 22:55:15.765471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.999 [2024-12-09 22:55:15.765579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.999 [2024-12-09 22:55:15.765614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:59.999 [2024-12-09 22:55:15.765646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.999 [2024-12-09 22:55:15.767862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.999 [2024-12-09 22:55:15.767934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.999 [2024-12-09 22:55:15.768055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:59.999 [2024-12-09 22:55:15.768134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:59.999 pt1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.999 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.000 "name": "raid_bdev1", 00:14:00.000 "uuid": "c199714c-e43f-4335-9161-f54377867cd9", 00:14:00.000 "strip_size_kb": 64, 00:14:00.000 "state": "configuring", 00:14:00.000 "raid_level": "raid0", 00:14:00.000 "superblock": true, 00:14:00.000 "num_base_bdevs": 2, 00:14:00.000 "num_base_bdevs_discovered": 1, 00:14:00.000 "num_base_bdevs_operational": 2, 00:14:00.000 "base_bdevs_list": [ 00:14:00.000 { 00:14:00.000 "name": "pt1", 00:14:00.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.000 "is_configured": true, 00:14:00.000 "data_offset": 2048, 00:14:00.000 "data_size": 63488 00:14:00.000 }, 00:14:00.000 { 00:14:00.000 "name": null, 00:14:00.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.000 "is_configured": false, 00:14:00.000 "data_offset": 2048, 00:14:00.000 "data_size": 63488 00:14:00.000 } 00:14:00.000 ] 00:14:00.000 }' 00:14:00.000 22:55:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.000 22:55:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.636 [2024-12-09 22:55:16.248676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.636 [2024-12-09 22:55:16.248838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.636 [2024-12-09 22:55:16.248868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:00.636 [2024-12-09 22:55:16.248898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.636 [2024-12-09 22:55:16.249399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.636 [2024-12-09 22:55:16.249423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.636 [2024-12-09 22:55:16.249534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.636 [2024-12-09 22:55:16.249567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.636 [2024-12-09 22:55:16.249698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:00.636 [2024-12-09 22:55:16.249719] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:00.636 [2024-12-09 22:55:16.249986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:00.636 [2024-12-09 22:55:16.250137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:00.636 [2024-12-09 22:55:16.250147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:00.636 [2024-12-09 22:55:16.250305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.636 pt2 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.636 "name": "raid_bdev1", 00:14:00.636 "uuid": "c199714c-e43f-4335-9161-f54377867cd9", 00:14:00.636 "strip_size_kb": 64, 00:14:00.636 "state": "online", 00:14:00.636 "raid_level": "raid0", 00:14:00.636 "superblock": true, 00:14:00.636 "num_base_bdevs": 2, 00:14:00.636 "num_base_bdevs_discovered": 2, 00:14:00.636 "num_base_bdevs_operational": 2, 00:14:00.636 "base_bdevs_list": [ 00:14:00.636 { 00:14:00.636 "name": "pt1", 00:14:00.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.636 "is_configured": true, 00:14:00.636 "data_offset": 2048, 00:14:00.636 "data_size": 63488 00:14:00.636 }, 00:14:00.636 { 00:14:00.636 "name": "pt2", 00:14:00.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.636 "is_configured": true, 00:14:00.636 "data_offset": 2048, 00:14:00.636 "data_size": 63488 00:14:00.636 } 00:14:00.636 ] 00:14:00.636 }' 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.636 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:00.895 
22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.895 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.154 [2024-12-09 22:55:16.752837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.154 "name": "raid_bdev1", 00:14:01.154 "aliases": [ 00:14:01.154 "c199714c-e43f-4335-9161-f54377867cd9" 00:14:01.154 ], 00:14:01.154 "product_name": "Raid Volume", 00:14:01.154 "block_size": 512, 00:14:01.154 "num_blocks": 126976, 00:14:01.154 "uuid": "c199714c-e43f-4335-9161-f54377867cd9", 00:14:01.154 "assigned_rate_limits": { 00:14:01.154 "rw_ios_per_sec": 0, 00:14:01.154 "rw_mbytes_per_sec": 0, 00:14:01.154 "r_mbytes_per_sec": 0, 00:14:01.154 "w_mbytes_per_sec": 0 00:14:01.154 }, 00:14:01.154 "claimed": false, 00:14:01.154 "zoned": false, 00:14:01.154 "supported_io_types": { 00:14:01.154 "read": true, 00:14:01.154 "write": true, 00:14:01.154 "unmap": true, 00:14:01.154 "flush": true, 00:14:01.154 "reset": true, 00:14:01.154 "nvme_admin": false, 00:14:01.154 "nvme_io": false, 00:14:01.154 "nvme_io_md": false, 00:14:01.154 
"write_zeroes": true, 00:14:01.154 "zcopy": false, 00:14:01.154 "get_zone_info": false, 00:14:01.154 "zone_management": false, 00:14:01.154 "zone_append": false, 00:14:01.154 "compare": false, 00:14:01.154 "compare_and_write": false, 00:14:01.154 "abort": false, 00:14:01.154 "seek_hole": false, 00:14:01.154 "seek_data": false, 00:14:01.154 "copy": false, 00:14:01.154 "nvme_iov_md": false 00:14:01.154 }, 00:14:01.154 "memory_domains": [ 00:14:01.154 { 00:14:01.154 "dma_device_id": "system", 00:14:01.154 "dma_device_type": 1 00:14:01.154 }, 00:14:01.154 { 00:14:01.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.154 "dma_device_type": 2 00:14:01.154 }, 00:14:01.154 { 00:14:01.154 "dma_device_id": "system", 00:14:01.154 "dma_device_type": 1 00:14:01.154 }, 00:14:01.154 { 00:14:01.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.154 "dma_device_type": 2 00:14:01.154 } 00:14:01.154 ], 00:14:01.154 "driver_specific": { 00:14:01.154 "raid": { 00:14:01.154 "uuid": "c199714c-e43f-4335-9161-f54377867cd9", 00:14:01.154 "strip_size_kb": 64, 00:14:01.154 "state": "online", 00:14:01.154 "raid_level": "raid0", 00:14:01.154 "superblock": true, 00:14:01.154 "num_base_bdevs": 2, 00:14:01.154 "num_base_bdevs_discovered": 2, 00:14:01.154 "num_base_bdevs_operational": 2, 00:14:01.154 "base_bdevs_list": [ 00:14:01.154 { 00:14:01.154 "name": "pt1", 00:14:01.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.154 "is_configured": true, 00:14:01.154 "data_offset": 2048, 00:14:01.154 "data_size": 63488 00:14:01.154 }, 00:14:01.154 { 00:14:01.154 "name": "pt2", 00:14:01.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.154 "is_configured": true, 00:14:01.154 "data_offset": 2048, 00:14:01.154 "data_size": 63488 00:14:01.154 } 00:14:01.154 ] 00:14:01.154 } 00:14:01.154 } 00:14:01.154 }' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:01.154 pt2' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.154 22:55:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.154 [2024-12-09 22:55:16.956374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c199714c-e43f-4335-9161-f54377867cd9 '!=' c199714c-e43f-4335-9161-f54377867cd9 ']' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61630 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61630 ']' 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61630 00:14:01.154 22:55:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:01.154 22:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.154 22:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61630 00:14:01.413 22:55:17 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.413 22:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.413 22:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61630' 00:14:01.413 killing process with pid 61630 00:14:01.413 22:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61630 00:14:01.413 [2024-12-09 22:55:17.041557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.413 22:55:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61630 00:14:01.413 [2024-12-09 22:55:17.041768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.413 [2024-12-09 22:55:17.041837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.413 [2024-12-09 22:55:17.041851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:01.413 [2024-12-09 22:55:17.255368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.791 22:55:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:02.791 00:14:02.791 real 0m4.743s 00:14:02.791 user 0m6.542s 00:14:02.791 sys 0m0.942s 00:14:02.792 22:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.792 22:55:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 ************************************ 00:14:02.792 END TEST raid_superblock_test 00:14:02.792 ************************************ 00:14:02.792 22:55:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:02.792 22:55:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:02.792 22:55:18 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:14:02.792 22:55:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 ************************************ 00:14:02.792 START TEST raid_read_error_test 00:14:02.792 ************************************ 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nzzr96SsMb 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61841 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61841 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61841 ']' 00:14:02.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.792 22:55:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 [2024-12-09 22:55:18.600355] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:14:02.792 [2024-12-09 22:55:18.600514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61841 ] 00:14:03.050 [2024-12-09 22:55:18.777735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.050 [2024-12-09 22:55:18.896510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.309 [2024-12-09 22:55:19.088471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.309 [2024-12-09 22:55:19.088534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 BaseBdev1_malloc 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 true 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 [2024-12-09 22:55:19.515410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:03.877 [2024-12-09 22:55:19.515609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.877 [2024-12-09 22:55:19.515652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:03.877 [2024-12-09 22:55:19.515684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.877 [2024-12-09 22:55:19.518005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.877 [2024-12-09 22:55:19.518117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:03.877 BaseBdev1 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:03.877 BaseBdev2_malloc 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 true 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 [2024-12-09 22:55:19.584414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:03.877 [2024-12-09 22:55:19.584519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.877 [2024-12-09 22:55:19.584539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:03.877 [2024-12-09 22:55:19.584551] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.877 [2024-12-09 22:55:19.586904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.877 [2024-12-09 22:55:19.587036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:03.877 BaseBdev2 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:03.877 22:55:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.877 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.877 [2024-12-09 22:55:19.596485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.877 [2024-12-09 22:55:19.598671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.877 [2024-12-09 22:55:19.598900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:03.877 [2024-12-09 22:55:19.598920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.877 [2024-12-09 22:55:19.599215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:03.877 [2024-12-09 22:55:19.599414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:03.877 [2024-12-09 22:55:19.599427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:03.877 [2024-12-09 22:55:19.599623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.878 "name": "raid_bdev1", 00:14:03.878 "uuid": "097b7f99-7950-49b8-ab61-28fefb312bdf", 00:14:03.878 "strip_size_kb": 64, 00:14:03.878 "state": "online", 00:14:03.878 "raid_level": "raid0", 00:14:03.878 "superblock": true, 00:14:03.878 "num_base_bdevs": 2, 00:14:03.878 "num_base_bdevs_discovered": 2, 00:14:03.878 "num_base_bdevs_operational": 2, 00:14:03.878 "base_bdevs_list": [ 00:14:03.878 { 00:14:03.878 "name": "BaseBdev1", 00:14:03.878 "uuid": "1a5f9aec-cdc6-5d31-a7c2-b8fe2d33b143", 00:14:03.878 "is_configured": true, 00:14:03.878 "data_offset": 2048, 00:14:03.878 "data_size": 63488 00:14:03.878 }, 00:14:03.878 { 00:14:03.878 "name": "BaseBdev2", 00:14:03.878 "uuid": "3a34183a-1e76-5102-9451-810ad67329f4", 00:14:03.878 "is_configured": true, 00:14:03.878 "data_offset": 2048, 00:14:03.878 "data_size": 63488 00:14:03.878 } 00:14:03.878 ] 00:14:03.878 }' 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.878 22:55:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.445 22:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:04.445 22:55:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:04.445 [2024-12-09 22:55:20.136865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.493 "name": "raid_bdev1", 00:14:05.493 "uuid": "097b7f99-7950-49b8-ab61-28fefb312bdf", 00:14:05.493 "strip_size_kb": 64, 00:14:05.493 "state": "online", 00:14:05.493 "raid_level": "raid0", 00:14:05.493 "superblock": true, 00:14:05.493 "num_base_bdevs": 2, 00:14:05.493 "num_base_bdevs_discovered": 2, 00:14:05.493 "num_base_bdevs_operational": 2, 00:14:05.493 "base_bdevs_list": [ 00:14:05.493 { 00:14:05.493 "name": "BaseBdev1", 00:14:05.493 "uuid": "1a5f9aec-cdc6-5d31-a7c2-b8fe2d33b143", 00:14:05.493 "is_configured": true, 00:14:05.493 "data_offset": 2048, 00:14:05.493 "data_size": 63488 00:14:05.493 }, 00:14:05.493 { 00:14:05.493 "name": "BaseBdev2", 00:14:05.493 "uuid": "3a34183a-1e76-5102-9451-810ad67329f4", 00:14:05.493 "is_configured": true, 00:14:05.493 "data_offset": 2048, 00:14:05.493 "data_size": 63488 00:14:05.493 } 00:14:05.493 ] 00:14:05.493 }' 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.493 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 [2024-12-09 22:55:21.537407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.752 [2024-12-09 22:55:21.537578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.752 [2024-12-09 22:55:21.540769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.752 [2024-12-09 22:55:21.540863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.752 [2024-12-09 22:55:21.540921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.752 [2024-12-09 22:55:21.540975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:05.752 { 00:14:05.752 "results": [ 00:14:05.752 { 00:14:05.752 "job": "raid_bdev1", 00:14:05.752 "core_mask": "0x1", 00:14:05.752 "workload": "randrw", 00:14:05.752 "percentage": 50, 00:14:05.752 "status": "finished", 00:14:05.752 "queue_depth": 1, 00:14:05.752 "io_size": 131072, 00:14:05.752 "runtime": 1.401652, 00:14:05.752 "iops": 15156.401160915833, 00:14:05.752 "mibps": 1894.5501451144792, 00:14:05.752 "io_failed": 1, 00:14:05.752 "io_timeout": 0, 00:14:05.752 "avg_latency_us": 91.29214715818055, 00:14:05.752 "min_latency_us": 27.165065502183406, 00:14:05.752 "max_latency_us": 1631.2454148471616 00:14:05.752 } 00:14:05.752 ], 00:14:05.752 "core_count": 1 00:14:05.752 } 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61841 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61841 ']' 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61841 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61841 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61841' 00:14:05.752 killing process with pid 61841 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61841 00:14:05.752 [2024-12-09 22:55:21.593726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.752 22:55:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61841 00:14:06.011 [2024-12-09 22:55:21.748520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.391 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nzzr96SsMb 00:14:07.391 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:07.392 00:14:07.392 real 0m4.549s 00:14:07.392 user 0m5.425s 00:14:07.392 sys 0m0.594s 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.392 ************************************ 00:14:07.392 END TEST raid_read_error_test 00:14:07.392 ************************************ 00:14:07.392 22:55:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.392 22:55:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:07.392 22:55:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:07.392 22:55:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.392 22:55:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.392 ************************************ 00:14:07.392 START TEST raid_write_error_test 00:14:07.392 ************************************ 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.392 22:55:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bn1dcXWSp2 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61987 00:14:07.392 22:55:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61987 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61987 ']' 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.392 22:55:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.392 [2024-12-09 22:55:23.217626] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:14:07.392 [2024-12-09 22:55:23.217737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61987 ]
00:14:07.651 [2024-12-09 22:55:23.392769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:07.910 [2024-12-09 22:55:23.520997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:07.910 [2024-12-09 22:55:23.742127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:07.910 [2024-12-09 22:55:23.742266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:08.479 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 BaseBdev1_malloc
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 true
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 [2024-12-09 22:55:24.220068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:14:08.480 [2024-12-09 22:55:24.220149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:08.480 [2024-12-09 22:55:24.220176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:14:08.480 [2024-12-09 22:55:24.220188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:08.480 [2024-12-09 22:55:24.222843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:08.480 [2024-12-09 22:55:24.222894] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:08.480 BaseBdev1
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 BaseBdev2_malloc
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 true
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 [2024-12-09 22:55:24.289243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:14:08.480 [2024-12-09 22:55:24.289328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:08.480 [2024-12-09 22:55:24.289353] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:14:08.480 [2024-12-09 22:55:24.289366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:08.480 [2024-12-09 22:55:24.291920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:08.480 [2024-12-09 22:55:24.291967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:08.480 BaseBdev2
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 [2024-12-09 22:55:24.301295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:08.480 [2024-12-09 22:55:24.303430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:08.480 [2024-12-09 22:55:24.303693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:08.480 [2024-12-09 22:55:24.303723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:08.480 [2024-12-09 22:55:24.304001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:08.480 [2024-12-09 22:55:24.304196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:08.480 [2024-12-09 22:55:24.304210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:14:08.480 [2024-12-09 22:55:24.304395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.480 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:08.740 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:08.740 "name": "raid_bdev1",
00:14:08.740 "uuid": "7c1e8b19-09f1-46cf-ac19-8bb2207c0af8",
00:14:08.740 "strip_size_kb": 64,
00:14:08.740 "state": "online",
00:14:08.740 "raid_level": "raid0",
00:14:08.740 "superblock": true,
00:14:08.740 "num_base_bdevs": 2,
00:14:08.740 "num_base_bdevs_discovered": 2,
00:14:08.740 "num_base_bdevs_operational": 2,
00:14:08.740 "base_bdevs_list": [
00:14:08.740 {
00:14:08.740 "name": "BaseBdev1",
00:14:08.740 "uuid": "d85e79d7-a9e9-5915-ba23-ee87a588993d",
00:14:08.740 "is_configured": true,
00:14:08.740 "data_offset": 2048,
00:14:08.740 "data_size": 63488
00:14:08.740 },
00:14:08.740 {
00:14:08.740 "name": "BaseBdev2",
00:14:08.740 "uuid": "85477de6-93e5-59cd-94e8-c0a8954513da",
00:14:08.740 "is_configured": true,
00:14:08.740 "data_offset": 2048,
00:14:08.740 "data_size": 63488
00:14:08.740 }
00:14:08.740 ]
00:14:08.740 }'
00:14:08.740 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:08.740 22:55:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:08.999 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:14:08.999 22:55:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:09.000 [2024-12-09 22:55:24.853768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:09.956 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.216 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:10.217 "name": "raid_bdev1",
00:14:10.217 "uuid": "7c1e8b19-09f1-46cf-ac19-8bb2207c0af8",
00:14:10.217 "strip_size_kb": 64,
00:14:10.217 "state": "online",
00:14:10.217 "raid_level": "raid0",
00:14:10.217 "superblock": true,
00:14:10.217 "num_base_bdevs": 2,
00:14:10.217 "num_base_bdevs_discovered": 2,
00:14:10.217 "num_base_bdevs_operational": 2,
00:14:10.217 "base_bdevs_list": [
00:14:10.217 {
00:14:10.217 "name": "BaseBdev1",
00:14:10.217 "uuid": "d85e79d7-a9e9-5915-ba23-ee87a588993d",
00:14:10.217 "is_configured": true,
00:14:10.217 "data_offset": 2048,
00:14:10.217 "data_size": 63488
00:14:10.217 },
00:14:10.217 {
00:14:10.217 "name": "BaseBdev2",
00:14:10.217 "uuid": "85477de6-93e5-59cd-94e8-c0a8954513da",
00:14:10.217 "is_configured": true,
00:14:10.217 "data_offset": 2048,
00:14:10.217 "data_size": 63488
00:14:10.217 }
00:14:10.217 ]
00:14:10.217 }'
00:14:10.217 22:55:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:10.217 22:55:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:10.505 [2024-12-09 22:55:26.214808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:10.505 [2024-12-09 22:55:26.214959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:10.505 [2024-12-09 22:55:26.218111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:10.505 [2024-12-09 22:55:26.218210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:10.505 [2024-12-09 22:55:26.218271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:10.505 [2024-12-09 22:55:26.218324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:14:10.505 {
00:14:10.505 "results": [
00:14:10.505 {
00:14:10.505 "job": "raid_bdev1",
00:14:10.505 "core_mask": "0x1",
00:14:10.505 "workload": "randrw",
00:14:10.505 "percentage": 50,
00:14:10.505 "status": "finished",
00:14:10.505 "queue_depth": 1,
00:14:10.505 "io_size": 131072,
00:14:10.505 "runtime": 1.362084,
00:14:10.505 "iops": 14645.939604312216,
00:14:10.505 "mibps": 1830.742450539027,
00:14:10.505 "io_failed": 1,
00:14:10.505 "io_timeout": 0,
00:14:10.505 "avg_latency_us": 94.4542414551663,
00:14:10.505 "min_latency_us": 27.053275109170304,
00:14:10.505 "max_latency_us": 1667.0183406113538
00:14:10.505 }
00:14:10.505 ],
00:14:10.505 "core_count": 1
00:14:10.505 }
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61987
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61987 ']'
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61987
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61987
00:14:10.505 killing process with pid 61987
22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61987'
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61987
00:14:10.505 [2024-12-09 22:55:26.259866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:10.505 22:55:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61987
00:14:10.763 [2024-12-09 22:55:26.400340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bn1dcXWSp2
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:14:12.140
00:14:12.140 real 0m4.551s
00:14:12.140 user 0m5.481s
00:14:12.140 sys 0m0.568s
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:12.140 ************************************
00:14:12.140 22:55:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:12.140 END TEST raid_write_error_test
00:14:12.140 ************************************
00:14:12.140 22:55:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:14:12.140 22:55:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:14:12.140 22:55:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:12.140 22:55:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:12.140 22:55:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:12.140 ************************************
00:14:12.140 START TEST raid_state_function_test
00:14:12.140 ************************************
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
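The `fail_per_s=0.73` that `bdev_raid.sh@845` extracts above with `grep`/`awk` is derivable from the bdevperf `"results"` JSON printed earlier in this log: `io_failed=1` over `runtime=1.362084` seconds, and the reported `mibps` likewise follows from `iops` and `io_size`. A standalone sketch of that arithmetic (values copied from the log; this is not part of the SPDK test scripts):

```python
# Reproduce the derived fields of the bdevperf "results" JSON shown above.
# All input values are copied verbatim from the log.
io_size = 131072               # bytes per I/O (128 KiB)
runtime = 1.362084             # seconds
iops = 14645.939604312216
io_failed = 1

mibps = iops * io_size / (1024 * 1024)   # MiB/s implied by iops and io_size
fail_per_s = io_failed / runtime         # what the script rounds to 0.73

assert abs(mibps - 1830.742450539027) < 1e-6
assert round(fail_per_s, 2) == 0.73
```

With a 128 KiB `io_size`, `mibps` is exactly `iops / 8`, which matches the two reported values.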
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62125
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62125'
00:14:12.140 Process raid pid: 62125
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62125
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62125 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:12.140 22:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:12.140 [2024-12-09 22:55:27.848954] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:14:12.140 [2024-12-09 22:55:27.849219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:12.399 [2024-12-09 22:55:28.011925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:12.399 [2024-12-09 22:55:28.134734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:12.658 [2024-12-09 22:55:28.354237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:12.658 [2024-12-09 22:55:28.354389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:12.917 [2024-12-09 22:55:28.707192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:12.917 [2024-12-09 22:55:28.707272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:12.917 [2024-12-09 22:55:28.707283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:12.917 [2024-12-09 22:55:28.707293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:12.917 "name": "Existed_Raid",
00:14:12.917 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.917 "strip_size_kb": 64,
00:14:12.917 "state": "configuring",
00:14:12.917 "raid_level": "concat",
00:14:12.917 "superblock": false,
00:14:12.917 "num_base_bdevs": 2,
00:14:12.917 "num_base_bdevs_discovered": 0,
00:14:12.917 "num_base_bdevs_operational": 2,
00:14:12.917 "base_bdevs_list": [
00:14:12.917 {
00:14:12.917 "name": "BaseBdev1",
00:14:12.917 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.917 "is_configured": false,
00:14:12.917 "data_offset": 0,
00:14:12.917 "data_size": 0
00:14:12.917 },
00:14:12.917 {
00:14:12.917 "name": "BaseBdev2",
00:14:12.917 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:12.917 "is_configured": false,
00:14:12.917 "data_offset": 0,
00:14:12.917 "data_size": 0
00:14:12.917 }
00:14:12.917 ]
00:14:12.917 }'
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:12.917 22:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.483 [2024-12-09 22:55:29.166427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:13.483 [2024-12-09 22:55:29.166575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.483 [2024-12-09 22:55:29.178395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:13.483 [2024-12-09 22:55:29.178519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:13.483 [2024-12-09 22:55:29.178578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:13.483 [2024-12-09 22:55:29.178614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.483 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.483 [2024-12-09 22:55:29.236777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.484 [
00:14:13.484 {
00:14:13.484 "name": "BaseBdev1",
00:14:13.484 "aliases": [
00:14:13.484 "159b3d13-8bda-4e05-9016-eea08f1faaac"
00:14:13.484 ],
00:14:13.484 "product_name": "Malloc disk",
00:14:13.484 "block_size": 512,
00:14:13.484 "num_blocks": 65536,
00:14:13.484 "uuid": "159b3d13-8bda-4e05-9016-eea08f1faaac",
00:14:13.484 "assigned_rate_limits": {
00:14:13.484 "rw_ios_per_sec": 0,
00:14:13.484 "rw_mbytes_per_sec": 0,
00:14:13.484 "r_mbytes_per_sec": 0,
00:14:13.484 "w_mbytes_per_sec": 0
00:14:13.484 },
00:14:13.484 "claimed": true,
00:14:13.484 "claim_type": "exclusive_write",
00:14:13.484 "zoned": false,
00:14:13.484 "supported_io_types": {
00:14:13.484 "read": true,
00:14:13.484 "write": true,
00:14:13.484 "unmap": true,
00:14:13.484 "flush": true,
00:14:13.484 "reset": true,
00:14:13.484 "nvme_admin": false,
00:14:13.484 "nvme_io": false,
00:14:13.484 "nvme_io_md": false,
00:14:13.484 "write_zeroes": true,
00:14:13.484 "zcopy": true,
00:14:13.484 "get_zone_info": false,
00:14:13.484 "zone_management": false,
00:14:13.484 "zone_append": false,
00:14:13.484 "compare": false,
00:14:13.484 "compare_and_write": false,
00:14:13.484 "abort": true,
00:14:13.484 "seek_hole": false,
00:14:13.484 "seek_data": false,
00:14:13.484 "copy": true,
00:14:13.484 "nvme_iov_md": false
00:14:13.484 },
00:14:13.484 "memory_domains": [
00:14:13.484 {
00:14:13.484 "dma_device_id": "system",
00:14:13.484 "dma_device_type": 1
00:14:13.484 },
00:14:13.484 {
00:14:13.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:13.484 "dma_device_type": 2
00:14:13.484 }
00:14:13.484 ],
00:14:13.484 "driver_specific": {}
00:14:13.484 }
00:14:13.484 ]
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:13.484 "name": "Existed_Raid",
00:14:13.484 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.484 "strip_size_kb": 64,
00:14:13.484 "state": "configuring",
00:14:13.484 "raid_level": "concat",
00:14:13.484 "superblock": false,
00:14:13.484 "num_base_bdevs": 2,
00:14:13.484 "num_base_bdevs_discovered": 1,
00:14:13.484 "num_base_bdevs_operational": 2,
00:14:13.484 "base_bdevs_list": [
00:14:13.484 {
00:14:13.484 "name": "BaseBdev1",
00:14:13.484 "uuid": "159b3d13-8bda-4e05-9016-eea08f1faaac",
00:14:13.484 "is_configured": true,
00:14:13.484 "data_offset": 0,
00:14:13.484 "data_size": 65536
00:14:13.484 },
00:14:13.484 {
00:14:13.484 "name": "BaseBdev2",
00:14:13.484 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.484 "is_configured": false,
00:14:13.484 "data_offset": 0,
00:14:13.484 "data_size": 0
00:14:13.484 }
00:14:13.484 ]
00:14:13.484 }'
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:13.484 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:14.050 [2024-12-09 22:55:29.768417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:14.050 [2024-12-09 22:55:29.768611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:14.050 [2024-12-09 22:55:29.780485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:14.050 [2024-12-09 22:55:29.783003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:14.050 [2024-12-09 22:55:29.783062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:14.050 "name": "Existed_Raid",
00:14:14.050 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:14.050 "strip_size_kb": 64,
00:14:14.050 "state": "configuring",
00:14:14.050 "raid_level": "concat",
00:14:14.050 "superblock": false,
00:14:14.050 "num_base_bdevs": 2,
00:14:14.050 "num_base_bdevs_discovered": 1,
00:14:14.050 "num_base_bdevs_operational": 2,
00:14:14.050 "base_bdevs_list": [
00:14:14.050 {
00:14:14.050 "name": "BaseBdev1",
00:14:14.050 "uuid": "159b3d13-8bda-4e05-9016-eea08f1faaac",
00:14:14.050 "is_configured": true,
00:14:14.050 "data_offset": 0,
00:14:14.050 "data_size": 65536
00:14:14.050 },
00:14:14.050 {
00:14:14.050 "name": "BaseBdev2",
00:14:14.050 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:14.050 "is_configured": false,
00:14:14.050 "data_offset": 0,
00:14:14.050 "data_size": 0
00:14:14.050 }
00:14:14.050 ] 00:14:14.050 }' 00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.050 22:55:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.618 [2024-12-09 22:55:30.263442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.618 [2024-12-09 22:55:30.263617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:14.618 [2024-12-09 22:55:30.263650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:14.618 [2024-12-09 22:55:30.263997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:14.618 [2024-12-09 22:55:30.264278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:14.618 [2024-12-09 22:55:30.264328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:14.618 [2024-12-09 22:55:30.264724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.618 BaseBdev2 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.618 22:55:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.618 [ 00:14:14.618 { 00:14:14.618 "name": "BaseBdev2", 00:14:14.618 "aliases": [ 00:14:14.618 "f60a6f7d-5764-4ccb-8fb7-0181dad3e558" 00:14:14.618 ], 00:14:14.618 "product_name": "Malloc disk", 00:14:14.618 "block_size": 512, 00:14:14.618 "num_blocks": 65536, 00:14:14.618 "uuid": "f60a6f7d-5764-4ccb-8fb7-0181dad3e558", 00:14:14.618 "assigned_rate_limits": { 00:14:14.618 "rw_ios_per_sec": 0, 00:14:14.618 "rw_mbytes_per_sec": 0, 00:14:14.618 "r_mbytes_per_sec": 0, 00:14:14.618 "w_mbytes_per_sec": 0 00:14:14.618 }, 00:14:14.618 "claimed": true, 00:14:14.618 "claim_type": "exclusive_write", 00:14:14.618 "zoned": false, 00:14:14.618 "supported_io_types": { 00:14:14.618 "read": true, 00:14:14.618 "write": true, 00:14:14.618 "unmap": true, 00:14:14.618 "flush": true, 00:14:14.618 "reset": true, 00:14:14.618 "nvme_admin": false, 00:14:14.618 "nvme_io": false, 00:14:14.618 "nvme_io_md": 
false, 00:14:14.618 "write_zeroes": true, 00:14:14.618 "zcopy": true, 00:14:14.618 "get_zone_info": false, 00:14:14.618 "zone_management": false, 00:14:14.618 "zone_append": false, 00:14:14.618 "compare": false, 00:14:14.618 "compare_and_write": false, 00:14:14.618 "abort": true, 00:14:14.618 "seek_hole": false, 00:14:14.618 "seek_data": false, 00:14:14.618 "copy": true, 00:14:14.618 "nvme_iov_md": false 00:14:14.618 }, 00:14:14.618 "memory_domains": [ 00:14:14.618 { 00:14:14.618 "dma_device_id": "system", 00:14:14.618 "dma_device_type": 1 00:14:14.618 }, 00:14:14.618 { 00:14:14.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.618 "dma_device_type": 2 00:14:14.618 } 00:14:14.618 ], 00:14:14.618 "driver_specific": {} 00:14:14.618 } 00:14:14.618 ] 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.618 "name": "Existed_Raid", 00:14:14.618 "uuid": "284496b0-aa42-48b6-af75-3b31280d5772", 00:14:14.618 "strip_size_kb": 64, 00:14:14.618 "state": "online", 00:14:14.618 "raid_level": "concat", 00:14:14.618 "superblock": false, 00:14:14.618 "num_base_bdevs": 2, 00:14:14.618 "num_base_bdevs_discovered": 2, 00:14:14.618 "num_base_bdevs_operational": 2, 00:14:14.618 "base_bdevs_list": [ 00:14:14.618 { 00:14:14.618 "name": "BaseBdev1", 00:14:14.618 "uuid": "159b3d13-8bda-4e05-9016-eea08f1faaac", 00:14:14.618 "is_configured": true, 00:14:14.618 "data_offset": 0, 00:14:14.618 "data_size": 65536 00:14:14.618 }, 00:14:14.618 { 00:14:14.618 "name": "BaseBdev2", 00:14:14.618 "uuid": "f60a6f7d-5764-4ccb-8fb7-0181dad3e558", 00:14:14.618 "is_configured": true, 00:14:14.618 "data_offset": 0, 00:14:14.618 "data_size": 65536 00:14:14.618 } 00:14:14.618 ] 00:14:14.618 }' 00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:14.618 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.227 [2024-12-09 22:55:30.782973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:15.227 "name": "Existed_Raid", 00:14:15.227 "aliases": [ 00:14:15.227 "284496b0-aa42-48b6-af75-3b31280d5772" 00:14:15.227 ], 00:14:15.227 "product_name": "Raid Volume", 00:14:15.227 "block_size": 512, 00:14:15.227 "num_blocks": 131072, 00:14:15.227 "uuid": "284496b0-aa42-48b6-af75-3b31280d5772", 00:14:15.227 "assigned_rate_limits": { 00:14:15.227 "rw_ios_per_sec": 0, 00:14:15.227 "rw_mbytes_per_sec": 0, 00:14:15.227 "r_mbytes_per_sec": 
0, 00:14:15.227 "w_mbytes_per_sec": 0 00:14:15.227 }, 00:14:15.227 "claimed": false, 00:14:15.227 "zoned": false, 00:14:15.227 "supported_io_types": { 00:14:15.227 "read": true, 00:14:15.227 "write": true, 00:14:15.227 "unmap": true, 00:14:15.227 "flush": true, 00:14:15.227 "reset": true, 00:14:15.227 "nvme_admin": false, 00:14:15.227 "nvme_io": false, 00:14:15.227 "nvme_io_md": false, 00:14:15.227 "write_zeroes": true, 00:14:15.227 "zcopy": false, 00:14:15.227 "get_zone_info": false, 00:14:15.227 "zone_management": false, 00:14:15.227 "zone_append": false, 00:14:15.227 "compare": false, 00:14:15.227 "compare_and_write": false, 00:14:15.227 "abort": false, 00:14:15.227 "seek_hole": false, 00:14:15.227 "seek_data": false, 00:14:15.227 "copy": false, 00:14:15.227 "nvme_iov_md": false 00:14:15.227 }, 00:14:15.227 "memory_domains": [ 00:14:15.227 { 00:14:15.227 "dma_device_id": "system", 00:14:15.227 "dma_device_type": 1 00:14:15.227 }, 00:14:15.227 { 00:14:15.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.227 "dma_device_type": 2 00:14:15.227 }, 00:14:15.227 { 00:14:15.227 "dma_device_id": "system", 00:14:15.227 "dma_device_type": 1 00:14:15.227 }, 00:14:15.227 { 00:14:15.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.227 "dma_device_type": 2 00:14:15.227 } 00:14:15.227 ], 00:14:15.227 "driver_specific": { 00:14:15.227 "raid": { 00:14:15.227 "uuid": "284496b0-aa42-48b6-af75-3b31280d5772", 00:14:15.227 "strip_size_kb": 64, 00:14:15.227 "state": "online", 00:14:15.227 "raid_level": "concat", 00:14:15.227 "superblock": false, 00:14:15.227 "num_base_bdevs": 2, 00:14:15.227 "num_base_bdevs_discovered": 2, 00:14:15.227 "num_base_bdevs_operational": 2, 00:14:15.227 "base_bdevs_list": [ 00:14:15.227 { 00:14:15.227 "name": "BaseBdev1", 00:14:15.227 "uuid": "159b3d13-8bda-4e05-9016-eea08f1faaac", 00:14:15.227 "is_configured": true, 00:14:15.227 "data_offset": 0, 00:14:15.227 "data_size": 65536 00:14:15.227 }, 00:14:15.227 { 00:14:15.227 "name": "BaseBdev2", 
00:14:15.227 "uuid": "f60a6f7d-5764-4ccb-8fb7-0181dad3e558", 00:14:15.227 "is_configured": true, 00:14:15.227 "data_offset": 0, 00:14:15.227 "data_size": 65536 00:14:15.227 } 00:14:15.227 ] 00:14:15.227 } 00:14:15.227 } 00:14:15.227 }' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:15.227 BaseBdev2' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.227 22:55:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.227 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.227 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.227 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:15.227 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.227 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.227 [2024-12-09 22:55:31.026305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.227 [2024-12-09 22:55:31.026516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.227 [2024-12-09 22:55:31.026585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.488 "name": "Existed_Raid", 00:14:15.488 "uuid": "284496b0-aa42-48b6-af75-3b31280d5772", 00:14:15.488 "strip_size_kb": 64, 00:14:15.488 
"state": "offline", 00:14:15.488 "raid_level": "concat", 00:14:15.488 "superblock": false, 00:14:15.488 "num_base_bdevs": 2, 00:14:15.488 "num_base_bdevs_discovered": 1, 00:14:15.488 "num_base_bdevs_operational": 1, 00:14:15.488 "base_bdevs_list": [ 00:14:15.488 { 00:14:15.488 "name": null, 00:14:15.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.488 "is_configured": false, 00:14:15.488 "data_offset": 0, 00:14:15.488 "data_size": 65536 00:14:15.488 }, 00:14:15.488 { 00:14:15.488 "name": "BaseBdev2", 00:14:15.488 "uuid": "f60a6f7d-5764-4ccb-8fb7-0181dad3e558", 00:14:15.488 "is_configured": true, 00:14:15.488 "data_offset": 0, 00:14:15.488 "data_size": 65536 00:14:15.488 } 00:14:15.488 ] 00:14:15.488 }' 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.488 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.056 [2024-12-09 22:55:31.677237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.056 [2024-12-09 22:55:31.677313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62125 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62125 ']' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62125 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62125 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62125' 00:14:16.056 killing process with pid 62125 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62125 00:14:16.056 [2024-12-09 22:55:31.886521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.056 22:55:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62125 00:14:16.056 [2024-12-09 22:55:31.905853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.434 ************************************ 00:14:17.434 END TEST raid_state_function_test 00:14:17.434 ************************************ 00:14:17.434 22:55:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:17.434 00:14:17.434 real 0m5.490s 00:14:17.434 user 0m7.797s 00:14:17.434 sys 0m0.905s 00:14:17.434 22:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.434 22:55:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.434 22:55:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:17.434 22:55:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:14:17.434 22:55:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.434 22:55:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.692 ************************************ 00:14:17.692 START TEST raid_state_function_test_sb 00:14:17.692 ************************************ 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62384 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62384' 00:14:17.692 Process raid pid: 62384 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62384 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62384 ']' 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.692 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.692 22:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.692 [2024-12-09 22:55:33.424176] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:14:17.692 [2024-12-09 22:55:33.424332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.950 [2024-12-09 22:55:33.607849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.950 [2024-12-09 22:55:33.761112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.208 [2024-12-09 22:55:34.010700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.208 [2024-12-09 22:55:34.010889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.466 [2024-12-09 22:55:34.284834] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:14:18.466 [2024-12-09 22:55:34.284924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.466 [2024-12-09 22:55:34.284943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.466 [2024-12-09 22:55:34.284955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.466 
22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.466 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.723 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.723 "name": "Existed_Raid", 00:14:18.723 "uuid": "c30850c6-c5b4-44b4-96a3-a639f2660c5e", 00:14:18.723 "strip_size_kb": 64, 00:14:18.723 "state": "configuring", 00:14:18.723 "raid_level": "concat", 00:14:18.723 "superblock": true, 00:14:18.723 "num_base_bdevs": 2, 00:14:18.723 "num_base_bdevs_discovered": 0, 00:14:18.723 "num_base_bdevs_operational": 2, 00:14:18.723 "base_bdevs_list": [ 00:14:18.723 { 00:14:18.723 "name": "BaseBdev1", 00:14:18.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.723 "is_configured": false, 00:14:18.723 "data_offset": 0, 00:14:18.723 "data_size": 0 00:14:18.723 }, 00:14:18.723 { 00:14:18.723 "name": "BaseBdev2", 00:14:18.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.723 "is_configured": false, 00:14:18.723 "data_offset": 0, 00:14:18.723 "data_size": 0 00:14:18.723 } 00:14:18.723 ] 00:14:18.723 }' 00:14:18.723 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.723 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 [2024-12-09 22:55:34.775954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:14:18.981 [2024-12-09 22:55:34.776062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.981 [2024-12-09 22:55:34.787934] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.981 [2024-12-09 22:55:34.788029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.981 [2024-12-09 22:55:34.788059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.981 [2024-12-09 22:55:34.788088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.981 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.240 [2024-12-09 22:55:34.847124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.240 BaseBdev1 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.240 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.240 [ 00:14:19.240 { 00:14:19.240 "name": "BaseBdev1", 00:14:19.240 "aliases": [ 00:14:19.240 "1735f0ef-d1f6-4a4f-b7aa-4991d4104de2" 00:14:19.240 ], 00:14:19.240 "product_name": "Malloc disk", 00:14:19.240 "block_size": 512, 00:14:19.240 "num_blocks": 65536, 00:14:19.240 "uuid": "1735f0ef-d1f6-4a4f-b7aa-4991d4104de2", 00:14:19.240 "assigned_rate_limits": { 00:14:19.240 "rw_ios_per_sec": 0, 00:14:19.240 "rw_mbytes_per_sec": 0, 00:14:19.240 "r_mbytes_per_sec": 0, 00:14:19.240 "w_mbytes_per_sec": 0 00:14:19.240 }, 00:14:19.240 "claimed": true, 
00:14:19.240 "claim_type": "exclusive_write", 00:14:19.240 "zoned": false, 00:14:19.240 "supported_io_types": { 00:14:19.240 "read": true, 00:14:19.240 "write": true, 00:14:19.240 "unmap": true, 00:14:19.240 "flush": true, 00:14:19.240 "reset": true, 00:14:19.240 "nvme_admin": false, 00:14:19.240 "nvme_io": false, 00:14:19.240 "nvme_io_md": false, 00:14:19.240 "write_zeroes": true, 00:14:19.240 "zcopy": true, 00:14:19.240 "get_zone_info": false, 00:14:19.240 "zone_management": false, 00:14:19.240 "zone_append": false, 00:14:19.240 "compare": false, 00:14:19.240 "compare_and_write": false, 00:14:19.241 "abort": true, 00:14:19.241 "seek_hole": false, 00:14:19.241 "seek_data": false, 00:14:19.241 "copy": true, 00:14:19.241 "nvme_iov_md": false 00:14:19.241 }, 00:14:19.241 "memory_domains": [ 00:14:19.241 { 00:14:19.241 "dma_device_id": "system", 00:14:19.241 "dma_device_type": 1 00:14:19.241 }, 00:14:19.241 { 00:14:19.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.241 "dma_device_type": 2 00:14:19.241 } 00:14:19.241 ], 00:14:19.241 "driver_specific": {} 00:14:19.241 } 00:14:19.241 ] 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.241 22:55:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.241 "name": "Existed_Raid", 00:14:19.241 "uuid": "1ee5a638-a55a-4094-8901-116c2f30cc3e", 00:14:19.241 "strip_size_kb": 64, 00:14:19.241 "state": "configuring", 00:14:19.241 "raid_level": "concat", 00:14:19.241 "superblock": true, 00:14:19.241 "num_base_bdevs": 2, 00:14:19.241 "num_base_bdevs_discovered": 1, 00:14:19.241 "num_base_bdevs_operational": 2, 00:14:19.241 "base_bdevs_list": [ 00:14:19.241 { 00:14:19.241 "name": "BaseBdev1", 00:14:19.241 "uuid": "1735f0ef-d1f6-4a4f-b7aa-4991d4104de2", 00:14:19.241 "is_configured": true, 00:14:19.241 "data_offset": 2048, 00:14:19.241 "data_size": 63488 00:14:19.241 }, 00:14:19.241 { 00:14:19.241 "name": "BaseBdev2", 00:14:19.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.241 
"is_configured": false, 00:14:19.241 "data_offset": 0, 00:14:19.241 "data_size": 0 00:14:19.241 } 00:14:19.241 ] 00:14:19.241 }' 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.241 22:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.499 [2024-12-09 22:55:35.282498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.499 [2024-12-09 22:55:35.282655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.499 [2024-12-09 22:55:35.294623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.499 [2024-12-09 22:55:35.297249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.499 [2024-12-09 22:55:35.297388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.499 22:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.499 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.760 22:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.760 "name": "Existed_Raid", 00:14:19.760 "uuid": "e7eeb980-5b1a-4629-a837-0c83708bb5e0", 00:14:19.760 "strip_size_kb": 64, 00:14:19.760 "state": "configuring", 00:14:19.760 "raid_level": "concat", 00:14:19.760 "superblock": true, 00:14:19.760 "num_base_bdevs": 2, 00:14:19.760 "num_base_bdevs_discovered": 1, 00:14:19.760 "num_base_bdevs_operational": 2, 00:14:19.760 "base_bdevs_list": [ 00:14:19.760 { 00:14:19.760 "name": "BaseBdev1", 00:14:19.760 "uuid": "1735f0ef-d1f6-4a4f-b7aa-4991d4104de2", 00:14:19.760 "is_configured": true, 00:14:19.760 "data_offset": 2048, 00:14:19.760 "data_size": 63488 00:14:19.760 }, 00:14:19.760 { 00:14:19.760 "name": "BaseBdev2", 00:14:19.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.760 "is_configured": false, 00:14:19.760 "data_offset": 0, 00:14:19.760 "data_size": 0 00:14:19.760 } 00:14:19.760 ] 00:14:19.760 }' 00:14:19.760 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.760 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 [2024-12-09 22:55:35.780868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.019 [2024-12-09 22:55:35.781338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:20.019 [2024-12-09 22:55:35.781363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:20.019 [2024-12-09 22:55:35.781722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:14:20.019 BaseBdev2 00:14:20.019 [2024-12-09 22:55:35.781945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:20.019 [2024-12-09 22:55:35.781971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:20.019 [2024-12-09 22:55:35.782152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.019 22:55:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 [ 00:14:20.019 { 00:14:20.019 "name": "BaseBdev2", 00:14:20.019 "aliases": [ 00:14:20.019 "ff628e10-0492-48fe-9a4a-8722de87c5f3" 00:14:20.019 ], 00:14:20.019 "product_name": "Malloc disk", 00:14:20.019 "block_size": 512, 00:14:20.019 "num_blocks": 65536, 00:14:20.019 "uuid": "ff628e10-0492-48fe-9a4a-8722de87c5f3", 00:14:20.019 "assigned_rate_limits": { 00:14:20.019 "rw_ios_per_sec": 0, 00:14:20.019 "rw_mbytes_per_sec": 0, 00:14:20.019 "r_mbytes_per_sec": 0, 00:14:20.019 "w_mbytes_per_sec": 0 00:14:20.019 }, 00:14:20.019 "claimed": true, 00:14:20.019 "claim_type": "exclusive_write", 00:14:20.019 "zoned": false, 00:14:20.019 "supported_io_types": { 00:14:20.019 "read": true, 00:14:20.019 "write": true, 00:14:20.019 "unmap": true, 00:14:20.019 "flush": true, 00:14:20.019 "reset": true, 00:14:20.019 "nvme_admin": false, 00:14:20.019 "nvme_io": false, 00:14:20.019 "nvme_io_md": false, 00:14:20.019 "write_zeroes": true, 00:14:20.019 "zcopy": true, 00:14:20.019 "get_zone_info": false, 00:14:20.019 "zone_management": false, 00:14:20.019 "zone_append": false, 00:14:20.019 "compare": false, 00:14:20.019 "compare_and_write": false, 00:14:20.019 "abort": true, 00:14:20.019 "seek_hole": false, 00:14:20.019 "seek_data": false, 00:14:20.019 "copy": true, 00:14:20.019 "nvme_iov_md": false 00:14:20.019 }, 00:14:20.019 "memory_domains": [ 00:14:20.019 { 00:14:20.019 "dma_device_id": "system", 00:14:20.019 "dma_device_type": 1 00:14:20.019 }, 00:14:20.019 { 00:14:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.019 "dma_device_type": 2 00:14:20.019 } 00:14:20.019 ], 00:14:20.019 "driver_specific": {} 00:14:20.019 } 00:14:20.019 ] 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:20.019 22:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.019 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.278 22:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.278 "name": "Existed_Raid", 00:14:20.278 "uuid": "e7eeb980-5b1a-4629-a837-0c83708bb5e0", 00:14:20.278 "strip_size_kb": 64, 00:14:20.278 "state": "online", 00:14:20.278 "raid_level": "concat", 00:14:20.278 "superblock": true, 00:14:20.278 "num_base_bdevs": 2, 00:14:20.278 "num_base_bdevs_discovered": 2, 00:14:20.278 "num_base_bdevs_operational": 2, 00:14:20.278 "base_bdevs_list": [ 00:14:20.278 { 00:14:20.278 "name": "BaseBdev1", 00:14:20.278 "uuid": "1735f0ef-d1f6-4a4f-b7aa-4991d4104de2", 00:14:20.278 "is_configured": true, 00:14:20.278 "data_offset": 2048, 00:14:20.278 "data_size": 63488 00:14:20.278 }, 00:14:20.278 { 00:14:20.278 "name": "BaseBdev2", 00:14:20.278 "uuid": "ff628e10-0492-48fe-9a4a-8722de87c5f3", 00:14:20.278 "is_configured": true, 00:14:20.278 "data_offset": 2048, 00:14:20.278 "data_size": 63488 00:14:20.278 } 00:14:20.278 ] 00:14:20.278 }' 00:14:20.278 22:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.278 22:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:20.628 22:55:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.628 [2024-12-09 22:55:36.276869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.628 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:20.628 "name": "Existed_Raid", 00:14:20.628 "aliases": [ 00:14:20.628 "e7eeb980-5b1a-4629-a837-0c83708bb5e0" 00:14:20.628 ], 00:14:20.628 "product_name": "Raid Volume", 00:14:20.628 "block_size": 512, 00:14:20.628 "num_blocks": 126976, 00:14:20.628 "uuid": "e7eeb980-5b1a-4629-a837-0c83708bb5e0", 00:14:20.628 "assigned_rate_limits": { 00:14:20.628 "rw_ios_per_sec": 0, 00:14:20.628 "rw_mbytes_per_sec": 0, 00:14:20.628 "r_mbytes_per_sec": 0, 00:14:20.628 "w_mbytes_per_sec": 0 00:14:20.628 }, 00:14:20.628 "claimed": false, 00:14:20.628 "zoned": false, 00:14:20.628 "supported_io_types": { 00:14:20.628 "read": true, 00:14:20.628 "write": true, 00:14:20.628 "unmap": true, 00:14:20.628 "flush": true, 00:14:20.628 "reset": true, 00:14:20.628 "nvme_admin": false, 00:14:20.628 "nvme_io": false, 00:14:20.628 "nvme_io_md": false, 00:14:20.628 "write_zeroes": true, 00:14:20.628 "zcopy": false, 00:14:20.628 "get_zone_info": false, 00:14:20.628 "zone_management": false, 00:14:20.628 "zone_append": false, 00:14:20.628 "compare": false, 00:14:20.628 "compare_and_write": false, 00:14:20.628 "abort": false, 00:14:20.628 "seek_hole": false, 00:14:20.628 "seek_data": false, 00:14:20.628 "copy": false, 00:14:20.628 "nvme_iov_md": false 00:14:20.628 }, 00:14:20.628 "memory_domains": [ 00:14:20.628 { 00:14:20.628 "dma_device_id": 
"system", 00:14:20.628 "dma_device_type": 1 00:14:20.628 }, 00:14:20.628 { 00:14:20.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.628 "dma_device_type": 2 00:14:20.628 }, 00:14:20.628 { 00:14:20.629 "dma_device_id": "system", 00:14:20.629 "dma_device_type": 1 00:14:20.629 }, 00:14:20.629 { 00:14:20.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.629 "dma_device_type": 2 00:14:20.629 } 00:14:20.629 ], 00:14:20.629 "driver_specific": { 00:14:20.629 "raid": { 00:14:20.629 "uuid": "e7eeb980-5b1a-4629-a837-0c83708bb5e0", 00:14:20.629 "strip_size_kb": 64, 00:14:20.629 "state": "online", 00:14:20.629 "raid_level": "concat", 00:14:20.629 "superblock": true, 00:14:20.629 "num_base_bdevs": 2, 00:14:20.629 "num_base_bdevs_discovered": 2, 00:14:20.629 "num_base_bdevs_operational": 2, 00:14:20.629 "base_bdevs_list": [ 00:14:20.629 { 00:14:20.629 "name": "BaseBdev1", 00:14:20.629 "uuid": "1735f0ef-d1f6-4a4f-b7aa-4991d4104de2", 00:14:20.629 "is_configured": true, 00:14:20.629 "data_offset": 2048, 00:14:20.629 "data_size": 63488 00:14:20.629 }, 00:14:20.629 { 00:14:20.629 "name": "BaseBdev2", 00:14:20.629 "uuid": "ff628e10-0492-48fe-9a4a-8722de87c5f3", 00:14:20.629 "is_configured": true, 00:14:20.629 "data_offset": 2048, 00:14:20.629 "data_size": 63488 00:14:20.629 } 00:14:20.629 ] 00:14:20.629 } 00:14:20.629 } 00:14:20.629 }' 00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:20.629 BaseBdev2' 00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:20.629 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.889 [2024-12-09 22:55:36.496185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-09 22:55:36.496230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-09 22:55:36.496300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.889 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:20.889 "name": "Existed_Raid",
00:14:20.889 "uuid": "e7eeb980-5b1a-4629-a837-0c83708bb5e0",
00:14:20.889 "strip_size_kb": 64,
00:14:20.889 "state": "offline",
00:14:20.889 "raid_level": "concat",
00:14:20.889 "superblock": true,
00:14:20.889 "num_base_bdevs": 2,
00:14:20.889 "num_base_bdevs_discovered": 1,
00:14:20.889 "num_base_bdevs_operational": 1,
00:14:20.889 "base_bdevs_list": [
00:14:20.889 {
00:14:20.889 "name": null,
00:14:20.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.889 "is_configured": false,
00:14:20.889 "data_offset": 0,
00:14:20.889 "data_size": 63488
00:14:20.889 },
00:14:20.889 {
00:14:20.889 "name": "BaseBdev2",
00:14:20.890 "uuid": "ff628e10-0492-48fe-9a4a-8722de87c5f3",
00:14:20.890 "is_configured": true,
00:14:20.890 "data_offset": 2048,
00:14:20.890 "data_size": 63488
00:14:20.890 }
00:14:20.890 ]
00:14:20.890 }'
00:14:20.890 22:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:20.890 22:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:21.457 [2024-12-09 22:55:37.083212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
[2024-12-09 22:55:37.083287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62384
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62384 ']'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62384
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62384
killing process with pid 62384
22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62384'
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62384
00:14:21.457 22:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62384
[2024-12-09 22:55:37.289802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-12-09 22:55:37.310087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:22.832 22:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:14:22.832
00:14:22.832 real 0m5.373s
00:14:22.832 user 0m7.444s
00:14:22.832 sys 0m0.979s
00:14:22.832 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:22.832 22:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:23.092 ************************************
00:14:23.092 END TEST raid_state_function_test_sb
00:14:23.092 ************************************
00:14:23.092 22:55:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:14:23.092 22:55:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:23.092 22:55:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:23.092 22:55:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:23.092 ************************************
00:14:23.092 START TEST raid_superblock_test
00:14:23.092 ************************************
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62636
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62636
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62636 ']'
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:23.092 22:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:14:23.092 [2024-12-09 22:55:38.837825] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:14:23.092 [2024-12-09 22:55:38.838081] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62636 ]
00:14:23.352 [2024-12-09 22:55:39.022314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:23.352 [2024-12-09 22:55:39.172773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:23.653 [2024-12-09 22:55:39.435704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-12-09 22:55:39.435888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:23.914 malloc1
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:23.914 [2024-12-09 22:55:39.757220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-12-09 22:55:39.757365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 22:55:39.757414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-12-09 22:55:39.757455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 22:55:39.760039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 22:55:39.760117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.914 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.174 malloc2
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.174 [2024-12-09 22:55:39.822982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-12-09 22:55:39.823102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-09 22:55:39.823148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-12-09 22:55:39.823180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-09 22:55:39.825852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-09 22:55:39.825952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.174 [2024-12-09 22:55:39.835125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
[2024-12-09 22:55:39.837556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
[2024-12-09 22:55:39.837792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-12-09 22:55:39.837807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
[2024-12-09 22:55:39.838147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-12-09 22:55:39.838338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-12-09 22:55:39.838350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
[2024-12-09 22:55:39.838596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.174 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:24.174 "name": "raid_bdev1",
00:14:24.174 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f",
00:14:24.174 "strip_size_kb": 64,
00:14:24.174 "state": "online",
00:14:24.174 "raid_level": "concat",
00:14:24.174 "superblock": true,
00:14:24.174 "num_base_bdevs": 2,
00:14:24.174 "num_base_bdevs_discovered": 2,
00:14:24.174 "num_base_bdevs_operational": 2,
00:14:24.174 "base_bdevs_list": [
00:14:24.174 {
00:14:24.174 "name": "pt1",
00:14:24.174 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:24.174 "is_configured": true,
00:14:24.174 "data_offset": 2048,
00:14:24.174 "data_size": 63488
00:14:24.174 },
00:14:24.174 {
00:14:24.174 "name": "pt2",
00:14:24.174 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:24.174 "is_configured": true,
00:14:24.174 "data_offset": 2048,
00:14:24.175 "data_size": 63488
00:14:24.175 }
00:14:24.175 ]
00:14:24.175 }'
00:14:24.175 22:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:24.175 22:55:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:24.433 [2024-12-09 22:55:40.250737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:24.433 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.692 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:24.692 "name": "raid_bdev1",
00:14:24.692 "aliases": [
00:14:24.692 "9076bfa4-2dd2-4562-9396-ea6ed678fa5f"
00:14:24.692 ],
00:14:24.692 "product_name": "Raid Volume",
00:14:24.692 "block_size": 512,
00:14:24.692 "num_blocks": 126976,
00:14:24.692 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f",
00:14:24.692 "assigned_rate_limits": {
00:14:24.692 "rw_ios_per_sec": 0,
00:14:24.692 "rw_mbytes_per_sec": 0,
00:14:24.692 "r_mbytes_per_sec": 0,
00:14:24.692 "w_mbytes_per_sec": 0
00:14:24.692 },
00:14:24.692 "claimed": false,
00:14:24.692 "zoned": false,
00:14:24.692 "supported_io_types": {
00:14:24.692 "read": true,
00:14:24.692 "write": true,
00:14:24.692 "unmap": true,
00:14:24.692 "flush": true,
00:14:24.692 "reset": true,
00:14:24.692 "nvme_admin": false,
00:14:24.692 "nvme_io": false,
00:14:24.692 "nvme_io_md": false,
00:14:24.692 "write_zeroes": true,
00:14:24.692 "zcopy": false,
00:14:24.692 "get_zone_info": false,
00:14:24.693 "zone_management": false,
00:14:24.693 "zone_append": false,
00:14:24.693 "compare": false,
00:14:24.693 "compare_and_write": false,
00:14:24.693 "abort": false,
00:14:24.693 "seek_hole": false,
00:14:24.693 "seek_data": false,
00:14:24.693 "copy": false,
00:14:24.693 "nvme_iov_md": false
00:14:24.693 },
00:14:24.693 "memory_domains": [
00:14:24.693 {
00:14:24.693 "dma_device_id": "system",
00:14:24.693 "dma_device_type": 1
00:14:24.693 },
00:14:24.693 {
00:14:24.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:24.693 "dma_device_type": 2
00:14:24.693 },
00:14:24.693 {
00:14:24.693 "dma_device_id": "system",
00:14:24.693 "dma_device_type": 1
00:14:24.693 },
00:14:24.693 {
00:14:24.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:24.693 "dma_device_type": 2
00:14:24.693 }
00:14:24.693 ],
00:14:24.693 "driver_specific": {
00:14:24.693 "raid": {
00:14:24.693 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f",
00:14:24.693 "strip_size_kb": 64,
00:14:24.693 "state": "online",
00:14:24.693 "raid_level": "concat",
00:14:24.693 "superblock": true,
00:14:24.693 "num_base_bdevs": 2,
00:14:24.693 "num_base_bdevs_discovered": 2,
00:14:24.693 "num_base_bdevs_operational": 2,
00:14:24.693 "base_bdevs_list": [
00:14:24.693 {
00:14:24.693 "name": "pt1",
00:14:24.693 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:24.693 "is_configured": true,
00:14:24.693 "data_offset": 2048,
00:14:24.693 "data_size": 63488
00:14:24.693 },
00:14:24.693 {
00:14:24.693 "name": "pt2",
00:14:24.693 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:24.693 "is_configured": true,
00:14:24.693 "data_offset": 2048,
00:14:24.693 "data_size": 63488
00:14:24.693 }
00:14:24.693 ]
00:14:24.693 }
00:14:24.693 }
00:14:24.693 }'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:24.693 pt2'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
[2024-12-09 22:55:40.474351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9076bfa4-2dd2-4562-9396-ea6ed678fa5f
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9076bfa4-2dd2-4562-9396-ea6ed678fa5f ']'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.693 [2024-12-09 22:55:40.521871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-12-09 22:55:40.521968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-09 22:55:40.522108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-12-09 22:55:40.522176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-12-09 22:55:40.522192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.693 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.954 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:24.955 [2024-12-09 22:55:40.653674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
[2024-12-09 22:55:40.656140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
[2024-12-09 22:55:40.656225] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
[2024-12-09 22:55:40.656294] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
[2024-12-09 22:55:40.656313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-12-09 22:55:40.656326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:14:24.955 request:
00:14:24.955 {
00:14:24.955 "name": "raid_bdev1",
00:14:24.955 "raid_level": "concat",
00:14:24.955 "base_bdevs": [
00:14:24.955 "malloc1",
00:14:24.955 "malloc2"
00:14:24.955 ],
00:14:24.955 "strip_size_kb": 64,
00:14:24.955 "superblock": false, 00:14:24.955 "method": "bdev_raid_create", 00:14:24.955 "req_id": 1 00:14:24.955 } 00:14:24.955 Got JSON-RPC error response 00:14:24.955 response: 00:14:24.955 { 00:14:24.955 "code": -17, 00:14:24.955 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:24.955 } 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.955 [2024-12-09 22:55:40.713611] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:14:24.955 [2024-12-09 22:55:40.713758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.955 [2024-12-09 22:55:40.713801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:24.955 [2024-12-09 22:55:40.713842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.955 [2024-12-09 22:55:40.716754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.955 [2024-12-09 22:55:40.716839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:24.955 [2024-12-09 22:55:40.716987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:24.955 [2024-12-09 22:55:40.717092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:24.955 pt1 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.955 "name": "raid_bdev1", 00:14:24.955 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f", 00:14:24.955 "strip_size_kb": 64, 00:14:24.955 "state": "configuring", 00:14:24.955 "raid_level": "concat", 00:14:24.955 "superblock": true, 00:14:24.955 "num_base_bdevs": 2, 00:14:24.955 "num_base_bdevs_discovered": 1, 00:14:24.955 "num_base_bdevs_operational": 2, 00:14:24.955 "base_bdevs_list": [ 00:14:24.955 { 00:14:24.955 "name": "pt1", 00:14:24.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.955 "is_configured": true, 00:14:24.955 "data_offset": 2048, 00:14:24.955 "data_size": 63488 00:14:24.955 }, 00:14:24.955 { 00:14:24.955 "name": null, 00:14:24.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.955 "is_configured": false, 00:14:24.955 "data_offset": 2048, 00:14:24.955 "data_size": 63488 00:14:24.955 } 00:14:24.955 ] 00:14:24.955 }' 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.955 22:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.530 [2024-12-09 22:55:41.176853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.530 [2024-12-09 22:55:41.176977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.530 [2024-12-09 22:55:41.177007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:25.530 [2024-12-09 22:55:41.177023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.530 [2024-12-09 22:55:41.177686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.530 [2024-12-09 22:55:41.177792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.530 [2024-12-09 22:55:41.177936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:25.530 [2024-12-09 22:55:41.177979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.530 [2024-12-09 22:55:41.178141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:25.530 [2024-12-09 22:55:41.178156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:25.530 [2024-12-09 22:55:41.178505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:25.530 [2024-12-09 22:55:41.178711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:14:25.530 [2024-12-09 22:55:41.178722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:25.530 [2024-12-09 22:55:41.178907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.530 pt2 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.530 "name": "raid_bdev1", 00:14:25.530 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f", 00:14:25.530 "strip_size_kb": 64, 00:14:25.530 "state": "online", 00:14:25.530 "raid_level": "concat", 00:14:25.530 "superblock": true, 00:14:25.530 "num_base_bdevs": 2, 00:14:25.530 "num_base_bdevs_discovered": 2, 00:14:25.530 "num_base_bdevs_operational": 2, 00:14:25.530 "base_bdevs_list": [ 00:14:25.530 { 00:14:25.530 "name": "pt1", 00:14:25.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.530 "is_configured": true, 00:14:25.530 "data_offset": 2048, 00:14:25.530 "data_size": 63488 00:14:25.530 }, 00:14:25.530 { 00:14:25.530 "name": "pt2", 00:14:25.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.530 "is_configured": true, 00:14:25.530 "data_offset": 2048, 00:14:25.530 "data_size": 63488 00:14:25.530 } 00:14:25.530 ] 00:14:25.530 }' 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.530 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.790 22:55:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.790 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.790 [2024-12-09 22:55:41.628369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.050 "name": "raid_bdev1", 00:14:26.050 "aliases": [ 00:14:26.050 "9076bfa4-2dd2-4562-9396-ea6ed678fa5f" 00:14:26.050 ], 00:14:26.050 "product_name": "Raid Volume", 00:14:26.050 "block_size": 512, 00:14:26.050 "num_blocks": 126976, 00:14:26.050 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f", 00:14:26.050 "assigned_rate_limits": { 00:14:26.050 "rw_ios_per_sec": 0, 00:14:26.050 "rw_mbytes_per_sec": 0, 00:14:26.050 "r_mbytes_per_sec": 0, 00:14:26.050 "w_mbytes_per_sec": 0 00:14:26.050 }, 00:14:26.050 "claimed": false, 00:14:26.050 "zoned": false, 00:14:26.050 "supported_io_types": { 00:14:26.050 "read": true, 00:14:26.050 "write": true, 00:14:26.050 "unmap": true, 00:14:26.050 "flush": true, 00:14:26.050 "reset": true, 00:14:26.050 "nvme_admin": false, 00:14:26.050 "nvme_io": false, 00:14:26.050 "nvme_io_md": false, 00:14:26.050 "write_zeroes": true, 00:14:26.050 "zcopy": false, 00:14:26.050 "get_zone_info": false, 00:14:26.050 "zone_management": false, 00:14:26.050 "zone_append": false, 00:14:26.050 "compare": false, 00:14:26.050 "compare_and_write": false, 00:14:26.050 "abort": false, 00:14:26.050 "seek_hole": false, 00:14:26.050 
"seek_data": false, 00:14:26.050 "copy": false, 00:14:26.050 "nvme_iov_md": false 00:14:26.050 }, 00:14:26.050 "memory_domains": [ 00:14:26.050 { 00:14:26.050 "dma_device_id": "system", 00:14:26.050 "dma_device_type": 1 00:14:26.050 }, 00:14:26.050 { 00:14:26.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.050 "dma_device_type": 2 00:14:26.050 }, 00:14:26.050 { 00:14:26.050 "dma_device_id": "system", 00:14:26.050 "dma_device_type": 1 00:14:26.050 }, 00:14:26.050 { 00:14:26.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.050 "dma_device_type": 2 00:14:26.050 } 00:14:26.050 ], 00:14:26.050 "driver_specific": { 00:14:26.050 "raid": { 00:14:26.050 "uuid": "9076bfa4-2dd2-4562-9396-ea6ed678fa5f", 00:14:26.050 "strip_size_kb": 64, 00:14:26.050 "state": "online", 00:14:26.050 "raid_level": "concat", 00:14:26.050 "superblock": true, 00:14:26.050 "num_base_bdevs": 2, 00:14:26.050 "num_base_bdevs_discovered": 2, 00:14:26.050 "num_base_bdevs_operational": 2, 00:14:26.050 "base_bdevs_list": [ 00:14:26.050 { 00:14:26.050 "name": "pt1", 00:14:26.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.050 "is_configured": true, 00:14:26.050 "data_offset": 2048, 00:14:26.050 "data_size": 63488 00:14:26.050 }, 00:14:26.050 { 00:14:26.050 "name": "pt2", 00:14:26.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.050 "is_configured": true, 00:14:26.050 "data_offset": 2048, 00:14:26.050 "data_size": 63488 00:14:26.050 } 00:14:26.050 ] 00:14:26.050 } 00:14:26.050 } 00:14:26.050 }' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:26.050 pt2' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.050 22:55:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.050 [2024-12-09 22:55:41.883988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.050 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9076bfa4-2dd2-4562-9396-ea6ed678fa5f '!=' 9076bfa4-2dd2-4562-9396-ea6ed678fa5f ']' 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62636 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62636 ']' 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62636 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62636 00:14:26.309 killing process with pid 62636 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62636' 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62636 00:14:26.309 22:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62636 00:14:26.309 [2024-12-09 22:55:41.973555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.309 [2024-12-09 22:55:41.973703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.309 [2024-12-09 22:55:41.973793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.309 [2024-12-09 22:55:41.973808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:26.569 [2024-12-09 22:55:42.244926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.951 ************************************ 00:14:27.951 END TEST raid_superblock_test 00:14:27.951 ************************************ 00:14:27.951 22:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:27.951 00:14:27.951 real 0m4.932s 00:14:27.951 user 0m6.659s 00:14:27.951 sys 0m0.910s 00:14:27.951 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.951 22:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.951 22:55:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:14:27.951 22:55:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:27.951 22:55:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.951 22:55:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.951 ************************************ 00:14:27.951 START TEST raid_read_error_test 00:14:27.951 ************************************ 00:14:27.951 22:55:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:27.951 22:55:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QqyLsUFmsZ 00:14:27.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62853 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62853 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62853 ']' 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.951 22:55:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:28.210 [2024-12-09 22:55:43.845856] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:14:28.210 [2024-12-09 22:55:43.846013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62853 ] 00:14:28.210 [2024-12-09 22:55:44.034875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.470 [2024-12-09 22:55:44.195103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.728 [2024-12-09 22:55:44.467329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.728 [2024-12-09 22:55:44.467397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.986 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.986 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:28.986 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:28.986 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.986 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.986 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 BaseBdev1_malloc 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 true 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 [2024-12-09 22:55:44.861032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:29.246 [2024-12-09 22:55:44.861221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.246 [2024-12-09 22:55:44.861260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:29.246 [2024-12-09 22:55:44.861278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.246 [2024-12-09 22:55:44.864408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.246 [2024-12-09 22:55:44.864562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.246 BaseBdev1 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 BaseBdev2_malloc 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 true 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 [2024-12-09 22:55:44.943438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:29.246 [2024-12-09 22:55:44.943572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.246 [2024-12-09 22:55:44.943603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:29.246 [2024-12-09 22:55:44.943618] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.246 [2024-12-09 22:55:44.946798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.246 [2024-12-09 22:55:44.946880] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.246 BaseBdev2 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 [2024-12-09 22:55:44.955549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:29.246 [2024-12-09 22:55:44.958273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.246 [2024-12-09 22:55:44.958718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:29.246 [2024-12-09 22:55:44.958757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:29.246 [2024-12-09 22:55:44.959120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:29.246 [2024-12-09 22:55:44.959359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:29.246 [2024-12-09 22:55:44.959375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:29.246 [2024-12-09 22:55:44.959766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 22:55:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.246 22:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.246 "name": "raid_bdev1", 00:14:29.246 "uuid": "ef2ef712-6ca7-43b5-ad8f-2c1ff71d04b2", 00:14:29.246 "strip_size_kb": 64, 00:14:29.246 "state": "online", 00:14:29.246 "raid_level": "concat", 00:14:29.246 "superblock": true, 00:14:29.246 "num_base_bdevs": 2, 00:14:29.246 "num_base_bdevs_discovered": 2, 00:14:29.246 "num_base_bdevs_operational": 2, 00:14:29.246 "base_bdevs_list": [ 00:14:29.246 { 00:14:29.246 "name": "BaseBdev1", 00:14:29.246 "uuid": "00fcd4db-6937-58de-9902-2651d5fbf3b6", 00:14:29.246 "is_configured": true, 00:14:29.246 "data_offset": 2048, 00:14:29.246 "data_size": 63488 00:14:29.246 }, 00:14:29.246 { 00:14:29.246 "name": "BaseBdev2", 00:14:29.246 "uuid": "01a81724-cf47-572b-b70e-5ba58fad53e8", 00:14:29.246 "is_configured": true, 00:14:29.246 "data_offset": 2048, 00:14:29.246 "data_size": 63488 00:14:29.246 } 00:14:29.246 ] 00:14:29.246 }' 00:14:29.246 22:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.246 22:55:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.819 22:55:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:29.819 22:55:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.819 [2024-12-09 22:55:45.520482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:30.754 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.755 "name": "raid_bdev1", 00:14:30.755 "uuid": "ef2ef712-6ca7-43b5-ad8f-2c1ff71d04b2", 00:14:30.755 "strip_size_kb": 64, 00:14:30.755 "state": "online", 00:14:30.755 "raid_level": "concat", 00:14:30.755 "superblock": true, 00:14:30.755 "num_base_bdevs": 2, 00:14:30.755 "num_base_bdevs_discovered": 2, 00:14:30.755 "num_base_bdevs_operational": 2, 00:14:30.755 "base_bdevs_list": [ 00:14:30.755 { 00:14:30.755 "name": "BaseBdev1", 00:14:30.755 "uuid": "00fcd4db-6937-58de-9902-2651d5fbf3b6", 00:14:30.755 "is_configured": true, 00:14:30.755 "data_offset": 2048, 00:14:30.755 "data_size": 63488 00:14:30.755 }, 00:14:30.755 { 00:14:30.755 "name": "BaseBdev2", 00:14:30.755 "uuid": "01a81724-cf47-572b-b70e-5ba58fad53e8", 00:14:30.755 "is_configured": true, 00:14:30.755 "data_offset": 2048, 00:14:30.755 "data_size": 63488 00:14:30.755 } 00:14:30.755 ] 00:14:30.755 }' 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.755 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.322 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.322 22:55:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.322 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.322 [2024-12-09 22:55:46.891791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.323 [2024-12-09 22:55:46.891940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.323 [2024-12-09 22:55:46.895550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.323 [2024-12-09 22:55:46.895663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.323 [2024-12-09 22:55:46.895744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.323 [2024-12-09 22:55:46.895801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:31.323 { 00:14:31.323 "results": [ 00:14:31.323 { 00:14:31.323 "job": "raid_bdev1", 00:14:31.323 "core_mask": "0x1", 00:14:31.323 "workload": "randrw", 00:14:31.323 "percentage": 50, 00:14:31.323 "status": "finished", 00:14:31.323 "queue_depth": 1, 00:14:31.323 "io_size": 131072, 00:14:31.323 "runtime": 1.37179, 00:14:31.323 "iops": 11403.348909089584, 00:14:31.323 "mibps": 1425.418613636198, 00:14:31.323 "io_failed": 1, 00:14:31.323 "io_timeout": 0, 00:14:31.323 "avg_latency_us": 122.95047782595054, 00:14:31.323 "min_latency_us": 28.50655021834061, 00:14:31.323 "max_latency_us": 1767.1825327510917 00:14:31.323 } 00:14:31.323 ], 00:14:31.323 "core_count": 1 00:14:31.323 } 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62853 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62853 ']' 00:14:31.323 22:55:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62853 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62853 00:14:31.323 killing process with pid 62853 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62853' 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62853 00:14:31.323 [2024-12-09 22:55:46.942181] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.323 22:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62853 00:14:31.323 [2024-12-09 22:55:47.119956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.229 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:33.229 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:33.229 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QqyLsUFmsZ 00:14:33.229 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:14:33.229 ************************************ 00:14:33.229 END TEST raid_read_error_test 00:14:33.229 ************************************ 00:14:33.229 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:33.230 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:14:33.230 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:33.230 22:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:14:33.230 00:14:33.230 real 0m4.932s 00:14:33.230 user 0m5.779s 00:14:33.230 sys 0m0.715s 00:14:33.230 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.230 22:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.230 22:55:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:14:33.230 22:55:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:33.230 22:55:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.230 22:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.230 ************************************ 00:14:33.230 START TEST raid_write_error_test 00:14:33.230 ************************************ 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.230 22:55:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m0KjtPjS4y 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63004 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63004 00:14:33.230 22:55:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63004 ']' 00:14:33.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.230 22:55:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.230 [2024-12-09 22:55:48.849801] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:14:33.230 [2024-12-09 22:55:48.849966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63004 ] 00:14:33.230 [2024-12-09 22:55:49.040477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.489 [2024-12-09 22:55:49.202511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.749 [2024-12-09 22:55:49.482719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.749 [2024-12-09 22:55:49.482775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.008 BaseBdev1_malloc 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.008 true 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.008 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.008 [2024-12-09 22:55:49.856859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:34.008 [2024-12-09 22:55:49.856948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.008 [2024-12-09 22:55:49.856980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:34.008 [2024-12-09 22:55:49.856994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.008 [2024-12-09 22:55:49.859991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.008 [2024-12-09 22:55:49.860047] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.268 BaseBdev1 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 BaseBdev2_malloc 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 true 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 [2024-12-09 22:55:49.936717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:34.268 [2024-12-09 22:55:49.936797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.268 [2024-12-09 22:55:49.936823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:34.268 
[2024-12-09 22:55:49.936838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.268 [2024-12-09 22:55:49.940002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.268 [2024-12-09 22:55:49.940047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.268 BaseBdev2 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 [2024-12-09 22:55:49.948841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.268 [2024-12-09 22:55:49.951384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.268 [2024-12-09 22:55:49.951678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:34.268 [2024-12-09 22:55:49.951705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.268 [2024-12-09 22:55:49.952049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:34.268 [2024-12-09 22:55:49.952294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:34.268 [2024-12-09 22:55:49.952319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:34.268 [2024-12-09 22:55:49.952565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.268 
22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 22:55:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.268 22:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.268 "name": "raid_bdev1", 00:14:34.268 "uuid": "aa68673c-fc07-4b02-b1dd-24e06177d705", 00:14:34.268 "strip_size_kb": 64, 00:14:34.268 "state": "online", 00:14:34.268 "raid_level": "concat", 00:14:34.268 "superblock": true, 
00:14:34.268 "num_base_bdevs": 2, 00:14:34.268 "num_base_bdevs_discovered": 2, 00:14:34.268 "num_base_bdevs_operational": 2, 00:14:34.268 "base_bdevs_list": [ 00:14:34.268 { 00:14:34.268 "name": "BaseBdev1", 00:14:34.268 "uuid": "ccdaae50-4096-5234-a8de-553e1bdafddc", 00:14:34.268 "is_configured": true, 00:14:34.268 "data_offset": 2048, 00:14:34.268 "data_size": 63488 00:14:34.268 }, 00:14:34.268 { 00:14:34.268 "name": "BaseBdev2", 00:14:34.268 "uuid": "5c251e43-d31d-5402-aa03-e6fc1a46b07b", 00:14:34.268 "is_configured": true, 00:14:34.268 "data_offset": 2048, 00:14:34.268 "data_size": 63488 00:14:34.268 } 00:14:34.268 ] 00:14:34.268 }' 00:14:34.268 22:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.268 22:55:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.547 22:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.547 22:55:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:34.805 [2024-12-09 22:55:50.525807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.740 "name": "raid_bdev1", 00:14:35.740 "uuid": "aa68673c-fc07-4b02-b1dd-24e06177d705", 00:14:35.740 "strip_size_kb": 64, 00:14:35.740 "state": "online", 00:14:35.740 "raid_level": "concat", 
00:14:35.740 "superblock": true, 00:14:35.740 "num_base_bdevs": 2, 00:14:35.740 "num_base_bdevs_discovered": 2, 00:14:35.740 "num_base_bdevs_operational": 2, 00:14:35.740 "base_bdevs_list": [ 00:14:35.740 { 00:14:35.740 "name": "BaseBdev1", 00:14:35.740 "uuid": "ccdaae50-4096-5234-a8de-553e1bdafddc", 00:14:35.740 "is_configured": true, 00:14:35.740 "data_offset": 2048, 00:14:35.740 "data_size": 63488 00:14:35.740 }, 00:14:35.740 { 00:14:35.740 "name": "BaseBdev2", 00:14:35.740 "uuid": "5c251e43-d31d-5402-aa03-e6fc1a46b07b", 00:14:35.740 "is_configured": true, 00:14:35.740 "data_offset": 2048, 00:14:35.740 "data_size": 63488 00:14:35.740 } 00:14:35.740 ] 00:14:35.740 }' 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.740 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.309 [2024-12-09 22:55:51.880787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.309 [2024-12-09 22:55:51.880837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.309 [2024-12-09 22:55:51.884219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.309 [2024-12-09 22:55:51.884283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.309 [2024-12-09 22:55:51.884330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.309 [2024-12-09 22:55:51.884351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:36.309 { 
00:14:36.309 "results": [ 00:14:36.309 { 00:14:36.309 "job": "raid_bdev1", 00:14:36.309 "core_mask": "0x1", 00:14:36.309 "workload": "randrw", 00:14:36.309 "percentage": 50, 00:14:36.309 "status": "finished", 00:14:36.309 "queue_depth": 1, 00:14:36.309 "io_size": 131072, 00:14:36.309 "runtime": 1.3552, 00:14:36.309 "iops": 11277.302243211334, 00:14:36.309 "mibps": 1409.6627804014167, 00:14:36.309 "io_failed": 1, 00:14:36.309 "io_timeout": 0, 00:14:36.309 "avg_latency_us": 124.04591587057962, 00:14:36.309 "min_latency_us": 32.19563318777293, 00:14:36.309 "max_latency_us": 1652.709170305677 00:14:36.309 } 00:14:36.309 ], 00:14:36.309 "core_count": 1 00:14:36.309 } 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63004 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63004 ']' 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63004 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63004 00:14:36.309 killing process with pid 63004 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63004' 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63004 00:14:36.309 [2024-12-09 22:55:51.919081] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.309 22:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63004 00:14:36.309 [2024-12-09 22:55:52.082852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m0KjtPjS4y 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:37.714 00:14:37.714 real 0m4.746s 00:14:37.714 user 0m5.629s 00:14:37.714 sys 0m0.696s 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.714 22:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.714 ************************************ 00:14:37.714 END TEST raid_write_error_test 00:14:37.714 ************************************ 00:14:37.714 22:55:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:37.715 22:55:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:37.715 22:55:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:37.715 22:55:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.715 22:55:53 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.715 ************************************ 00:14:37.715 START TEST raid_state_function_test 00:14:37.715 ************************************ 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63142 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63142' 00:14:37.715 Process raid pid: 63142 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63142 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63142 ']' 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.715 22:55:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.974 [2024-12-09 22:55:53.656066] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:14:37.974 [2024-12-09 22:55:53.656209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.234 [2024-12-09 22:55:53.837188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.234 [2024-12-09 22:55:53.992020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.493 [2024-12-09 22:55:54.257229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.493 [2024-12-09 22:55:54.257314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.752 [2024-12-09 22:55:54.533370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.752 [2024-12-09 22:55:54.533447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.752 [2024-12-09 22:55:54.533470] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:14:38.752 [2024-12-09 22:55:54.533484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.752 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.753 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.753 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:38.753 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.753 "name": "Existed_Raid", 00:14:38.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.753 "strip_size_kb": 0, 00:14:38.753 "state": "configuring", 00:14:38.753 "raid_level": "raid1", 00:14:38.753 "superblock": false, 00:14:38.753 "num_base_bdevs": 2, 00:14:38.753 "num_base_bdevs_discovered": 0, 00:14:38.753 "num_base_bdevs_operational": 2, 00:14:38.753 "base_bdevs_list": [ 00:14:38.753 { 00:14:38.753 "name": "BaseBdev1", 00:14:38.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.753 "is_configured": false, 00:14:38.753 "data_offset": 0, 00:14:38.753 "data_size": 0 00:14:38.753 }, 00:14:38.753 { 00:14:38.753 "name": "BaseBdev2", 00:14:38.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.753 "is_configured": false, 00:14:38.753 "data_offset": 0, 00:14:38.753 "data_size": 0 00:14:38.753 } 00:14:38.753 ] 00:14:38.753 }' 00:14:38.753 22:55:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.753 22:55:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 [2024-12-09 22:55:55.020683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.322 [2024-12-09 22:55:55.020734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 [2024-12-09 22:55:55.032637] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.322 [2024-12-09 22:55:55.032693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.322 [2024-12-09 22:55:55.032705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.322 [2024-12-09 22:55:55.032719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 [2024-12-09 22:55:55.090101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.322 BaseBdev1 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.322 
22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.322 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 [ 00:14:39.322 { 00:14:39.322 "name": "BaseBdev1", 00:14:39.322 "aliases": [ 00:14:39.322 "f5966db8-320a-40bb-b9d8-48a14954b4a5" 00:14:39.322 ], 00:14:39.322 "product_name": "Malloc disk", 00:14:39.322 "block_size": 512, 00:14:39.322 "num_blocks": 65536, 00:14:39.322 "uuid": "f5966db8-320a-40bb-b9d8-48a14954b4a5", 00:14:39.322 "assigned_rate_limits": { 00:14:39.322 "rw_ios_per_sec": 0, 00:14:39.322 "rw_mbytes_per_sec": 0, 00:14:39.322 "r_mbytes_per_sec": 0, 00:14:39.322 "w_mbytes_per_sec": 0 00:14:39.322 }, 00:14:39.322 "claimed": true, 00:14:39.322 "claim_type": "exclusive_write", 00:14:39.322 "zoned": false, 00:14:39.322 "supported_io_types": { 00:14:39.322 "read": true, 00:14:39.322 "write": true, 00:14:39.322 "unmap": true, 00:14:39.322 "flush": true, 00:14:39.322 "reset": true, 00:14:39.322 "nvme_admin": false, 00:14:39.322 "nvme_io": false, 00:14:39.322 "nvme_io_md": false, 00:14:39.322 "write_zeroes": true, 00:14:39.322 "zcopy": true, 00:14:39.322 "get_zone_info": 
false, 00:14:39.322 "zone_management": false, 00:14:39.322 "zone_append": false, 00:14:39.322 "compare": false, 00:14:39.322 "compare_and_write": false, 00:14:39.322 "abort": true, 00:14:39.322 "seek_hole": false, 00:14:39.322 "seek_data": false, 00:14:39.322 "copy": true, 00:14:39.322 "nvme_iov_md": false 00:14:39.322 }, 00:14:39.322 "memory_domains": [ 00:14:39.322 { 00:14:39.322 "dma_device_id": "system", 00:14:39.322 "dma_device_type": 1 00:14:39.322 }, 00:14:39.322 { 00:14:39.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.322 "dma_device_type": 2 00:14:39.322 } 00:14:39.322 ], 00:14:39.322 "driver_specific": {} 00:14:39.322 } 00:14:39.322 ] 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.323 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.582 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.582 "name": "Existed_Raid", 00:14:39.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.582 "strip_size_kb": 0, 00:14:39.582 "state": "configuring", 00:14:39.582 "raid_level": "raid1", 00:14:39.582 "superblock": false, 00:14:39.582 "num_base_bdevs": 2, 00:14:39.582 "num_base_bdevs_discovered": 1, 00:14:39.582 "num_base_bdevs_operational": 2, 00:14:39.582 "base_bdevs_list": [ 00:14:39.582 { 00:14:39.582 "name": "BaseBdev1", 00:14:39.582 "uuid": "f5966db8-320a-40bb-b9d8-48a14954b4a5", 00:14:39.582 "is_configured": true, 00:14:39.582 "data_offset": 0, 00:14:39.582 "data_size": 65536 00:14:39.582 }, 00:14:39.582 { 00:14:39.582 "name": "BaseBdev2", 00:14:39.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.582 "is_configured": false, 00:14:39.582 "data_offset": 0, 00:14:39.582 "data_size": 0 00:14:39.582 } 00:14:39.582 ] 00:14:39.582 }' 00:14:39.582 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.582 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 [2024-12-09 22:55:55.597355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.841 [2024-12-09 22:55:55.597438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 [2024-12-09 22:55:55.605401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.841 [2024-12-09 22:55:55.607706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.841 [2024-12-09 22:55:55.607762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.841 "name": "Existed_Raid", 00:14:39.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.841 "strip_size_kb": 0, 00:14:39.841 "state": "configuring", 00:14:39.841 "raid_level": "raid1", 00:14:39.841 "superblock": false, 00:14:39.841 "num_base_bdevs": 2, 00:14:39.841 "num_base_bdevs_discovered": 1, 00:14:39.841 "num_base_bdevs_operational": 2, 00:14:39.841 "base_bdevs_list": [ 00:14:39.841 { 00:14:39.841 "name": "BaseBdev1", 00:14:39.841 "uuid": "f5966db8-320a-40bb-b9d8-48a14954b4a5", 00:14:39.841 
"is_configured": true, 00:14:39.841 "data_offset": 0, 00:14:39.841 "data_size": 65536 00:14:39.841 }, 00:14:39.841 { 00:14:39.841 "name": "BaseBdev2", 00:14:39.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.841 "is_configured": false, 00:14:39.841 "data_offset": 0, 00:14:39.841 "data_size": 0 00:14:39.841 } 00:14:39.841 ] 00:14:39.841 }' 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.841 22:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.430 [2024-12-09 22:55:56.095957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.430 [2024-12-09 22:55:56.096036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:40.430 [2024-12-09 22:55:56.096045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:40.430 [2024-12-09 22:55:56.096365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:40.430 [2024-12-09 22:55:56.096640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:40.430 [2024-12-09 22:55:56.096663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:40.430 [2024-12-09 22:55:56.097021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.430 BaseBdev2 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.430 [ 00:14:40.430 { 00:14:40.430 "name": "BaseBdev2", 00:14:40.430 "aliases": [ 00:14:40.430 "72e56950-1327-416b-bf91-a2943f05c8e8" 00:14:40.430 ], 00:14:40.430 "product_name": "Malloc disk", 00:14:40.430 "block_size": 512, 00:14:40.430 "num_blocks": 65536, 00:14:40.430 "uuid": "72e56950-1327-416b-bf91-a2943f05c8e8", 00:14:40.430 "assigned_rate_limits": { 00:14:40.430 "rw_ios_per_sec": 0, 00:14:40.430 "rw_mbytes_per_sec": 0, 00:14:40.430 "r_mbytes_per_sec": 0, 00:14:40.430 "w_mbytes_per_sec": 0 00:14:40.430 }, 00:14:40.430 "claimed": true, 00:14:40.430 "claim_type": 
"exclusive_write", 00:14:40.430 "zoned": false, 00:14:40.430 "supported_io_types": { 00:14:40.430 "read": true, 00:14:40.430 "write": true, 00:14:40.430 "unmap": true, 00:14:40.430 "flush": true, 00:14:40.430 "reset": true, 00:14:40.430 "nvme_admin": false, 00:14:40.430 "nvme_io": false, 00:14:40.430 "nvme_io_md": false, 00:14:40.430 "write_zeroes": true, 00:14:40.430 "zcopy": true, 00:14:40.430 "get_zone_info": false, 00:14:40.430 "zone_management": false, 00:14:40.430 "zone_append": false, 00:14:40.430 "compare": false, 00:14:40.430 "compare_and_write": false, 00:14:40.430 "abort": true, 00:14:40.430 "seek_hole": false, 00:14:40.430 "seek_data": false, 00:14:40.430 "copy": true, 00:14:40.430 "nvme_iov_md": false 00:14:40.430 }, 00:14:40.430 "memory_domains": [ 00:14:40.430 { 00:14:40.430 "dma_device_id": "system", 00:14:40.430 "dma_device_type": 1 00:14:40.430 }, 00:14:40.430 { 00:14:40.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.430 "dma_device_type": 2 00:14:40.430 } 00:14:40.430 ], 00:14:40.430 "driver_specific": {} 00:14:40.430 } 00:14:40.430 ] 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.430 
22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.430 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.430 "name": "Existed_Raid", 00:14:40.430 "uuid": "563365ed-998c-43b2-a2b7-2fdd54f66469", 00:14:40.430 "strip_size_kb": 0, 00:14:40.430 "state": "online", 00:14:40.430 "raid_level": "raid1", 00:14:40.430 "superblock": false, 00:14:40.430 "num_base_bdevs": 2, 00:14:40.430 "num_base_bdevs_discovered": 2, 00:14:40.430 "num_base_bdevs_operational": 2, 00:14:40.430 "base_bdevs_list": [ 00:14:40.430 { 00:14:40.430 "name": "BaseBdev1", 00:14:40.430 "uuid": "f5966db8-320a-40bb-b9d8-48a14954b4a5", 00:14:40.430 "is_configured": true, 00:14:40.430 "data_offset": 0, 00:14:40.430 "data_size": 65536 00:14:40.430 }, 00:14:40.431 { 00:14:40.431 "name": "BaseBdev2", 
00:14:40.431 "uuid": "72e56950-1327-416b-bf91-a2943f05c8e8", 00:14:40.431 "is_configured": true, 00:14:40.431 "data_offset": 0, 00:14:40.431 "data_size": 65536 00:14:40.431 } 00:14:40.431 ] 00:14:40.431 }' 00:14:40.431 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.431 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.000 [2024-12-09 22:55:56.575523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.000 "name": "Existed_Raid", 00:14:41.000 "aliases": [ 00:14:41.000 "563365ed-998c-43b2-a2b7-2fdd54f66469" 00:14:41.000 ], 
00:14:41.000 "product_name": "Raid Volume", 00:14:41.000 "block_size": 512, 00:14:41.000 "num_blocks": 65536, 00:14:41.000 "uuid": "563365ed-998c-43b2-a2b7-2fdd54f66469", 00:14:41.000 "assigned_rate_limits": { 00:14:41.000 "rw_ios_per_sec": 0, 00:14:41.000 "rw_mbytes_per_sec": 0, 00:14:41.000 "r_mbytes_per_sec": 0, 00:14:41.000 "w_mbytes_per_sec": 0 00:14:41.000 }, 00:14:41.000 "claimed": false, 00:14:41.000 "zoned": false, 00:14:41.000 "supported_io_types": { 00:14:41.000 "read": true, 00:14:41.000 "write": true, 00:14:41.000 "unmap": false, 00:14:41.000 "flush": false, 00:14:41.000 "reset": true, 00:14:41.000 "nvme_admin": false, 00:14:41.000 "nvme_io": false, 00:14:41.000 "nvme_io_md": false, 00:14:41.000 "write_zeroes": true, 00:14:41.000 "zcopy": false, 00:14:41.000 "get_zone_info": false, 00:14:41.000 "zone_management": false, 00:14:41.000 "zone_append": false, 00:14:41.000 "compare": false, 00:14:41.000 "compare_and_write": false, 00:14:41.000 "abort": false, 00:14:41.000 "seek_hole": false, 00:14:41.000 "seek_data": false, 00:14:41.000 "copy": false, 00:14:41.000 "nvme_iov_md": false 00:14:41.000 }, 00:14:41.000 "memory_domains": [ 00:14:41.000 { 00:14:41.000 "dma_device_id": "system", 00:14:41.000 "dma_device_type": 1 00:14:41.000 }, 00:14:41.000 { 00:14:41.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.000 "dma_device_type": 2 00:14:41.000 }, 00:14:41.000 { 00:14:41.000 "dma_device_id": "system", 00:14:41.000 "dma_device_type": 1 00:14:41.000 }, 00:14:41.000 { 00:14:41.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.000 "dma_device_type": 2 00:14:41.000 } 00:14:41.000 ], 00:14:41.000 "driver_specific": { 00:14:41.000 "raid": { 00:14:41.000 "uuid": "563365ed-998c-43b2-a2b7-2fdd54f66469", 00:14:41.000 "strip_size_kb": 0, 00:14:41.000 "state": "online", 00:14:41.000 "raid_level": "raid1", 00:14:41.000 "superblock": false, 00:14:41.000 "num_base_bdevs": 2, 00:14:41.000 "num_base_bdevs_discovered": 2, 00:14:41.000 "num_base_bdevs_operational": 
2, 00:14:41.000 "base_bdevs_list": [ 00:14:41.000 { 00:14:41.000 "name": "BaseBdev1", 00:14:41.000 "uuid": "f5966db8-320a-40bb-b9d8-48a14954b4a5", 00:14:41.000 "is_configured": true, 00:14:41.000 "data_offset": 0, 00:14:41.000 "data_size": 65536 00:14:41.000 }, 00:14:41.000 { 00:14:41.000 "name": "BaseBdev2", 00:14:41.000 "uuid": "72e56950-1327-416b-bf91-a2943f05c8e8", 00:14:41.000 "is_configured": true, 00:14:41.000 "data_offset": 0, 00:14:41.000 "data_size": 65536 00:14:41.000 } 00:14:41.000 ] 00:14:41.000 } 00:14:41.000 } 00:14:41.000 }' 00:14:41.000 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:41.001 BaseBdev2' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.001 22:55:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.001 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.001 [2024-12-09 22:55:56.826822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.260 "name": "Existed_Raid", 00:14:41.260 "uuid": 
"563365ed-998c-43b2-a2b7-2fdd54f66469", 00:14:41.260 "strip_size_kb": 0, 00:14:41.260 "state": "online", 00:14:41.260 "raid_level": "raid1", 00:14:41.260 "superblock": false, 00:14:41.260 "num_base_bdevs": 2, 00:14:41.260 "num_base_bdevs_discovered": 1, 00:14:41.260 "num_base_bdevs_operational": 1, 00:14:41.260 "base_bdevs_list": [ 00:14:41.260 { 00:14:41.260 "name": null, 00:14:41.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.260 "is_configured": false, 00:14:41.260 "data_offset": 0, 00:14:41.260 "data_size": 65536 00:14:41.260 }, 00:14:41.260 { 00:14:41.260 "name": "BaseBdev2", 00:14:41.260 "uuid": "72e56950-1327-416b-bf91-a2943f05c8e8", 00:14:41.260 "is_configured": true, 00:14:41.260 "data_offset": 0, 00:14:41.260 "data_size": 65536 00:14:41.260 } 00:14:41.260 ] 00:14:41.260 }' 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.260 22:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.520 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:41.520 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:41.520 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:41.520 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.520 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.520 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.779 [2024-12-09 22:55:57.423992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.779 [2024-12-09 22:55:57.424124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.779 [2024-12-09 22:55:57.539037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.779 [2024-12-09 22:55:57.539126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.779 [2024-12-09 22:55:57.539143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:41.779 
22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63142 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63142 ']' 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63142 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.779 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63142 00:14:42.037 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.037 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.037 killing process with pid 63142 00:14:42.037 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63142' 00:14:42.037 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63142 00:14:42.037 [2024-12-09 22:55:57.638056] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.037 22:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63142 00:14:42.037 [2024-12-09 22:55:57.658064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.414 22:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:43.414 00:14:43.414 real 0m5.502s 00:14:43.414 user 0m7.669s 00:14:43.414 sys 0m1.043s 00:14:43.414 22:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:14:43.414 22:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.414 ************************************ 00:14:43.414 END TEST raid_state_function_test 00:14:43.414 ************************************ 00:14:43.414 22:55:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:43.414 22:55:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:43.414 22:55:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.414 22:55:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.414 ************************************ 00:14:43.414 START TEST raid_state_function_test_sb 00:14:43.414 ************************************ 00:14:43.414 22:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63401 00:14:43.415 Process raid pid: 63401 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63401' 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63401 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 63401 ']' 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.415 22:55:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.415 [2024-12-09 22:55:59.219952] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:14:43.415 [2024-12-09 22:55:59.220160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.674 [2024-12-09 22:55:59.409885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.933 [2024-12-09 22:55:59.569668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.191 [2024-12-09 22:55:59.843795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.191 [2024-12-09 22:55:59.843861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.490 [2024-12-09 22:56:00.160217] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.490 [2024-12-09 22:56:00.160293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.490 [2024-12-09 22:56:00.160306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.490 [2024-12-09 22:56:00.160319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.490 "name": "Existed_Raid", 00:14:44.490 "uuid": "b8d164cf-ddea-49cb-b1c5-21e15ca18b8d", 00:14:44.490 "strip_size_kb": 0, 00:14:44.490 "state": "configuring", 00:14:44.490 "raid_level": "raid1", 00:14:44.490 "superblock": true, 00:14:44.490 "num_base_bdevs": 2, 00:14:44.490 "num_base_bdevs_discovered": 0, 00:14:44.490 "num_base_bdevs_operational": 2, 00:14:44.490 "base_bdevs_list": [ 00:14:44.490 { 00:14:44.490 "name": "BaseBdev1", 00:14:44.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.490 "is_configured": false, 00:14:44.490 "data_offset": 0, 00:14:44.490 "data_size": 0 00:14:44.490 }, 00:14:44.490 { 00:14:44.490 "name": "BaseBdev2", 00:14:44.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.490 "is_configured": false, 00:14:44.490 "data_offset": 0, 00:14:44.490 "data_size": 0 00:14:44.490 } 00:14:44.490 ] 00:14:44.490 }' 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.490 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.063 [2024-12-09 22:56:00.643326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.063 [2024-12-09 22:56:00.643378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.063 [2024-12-09 22:56:00.651329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.063 [2024-12-09 22:56:00.651390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.063 [2024-12-09 22:56:00.651403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.063 [2024-12-09 22:56:00.651418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:45.063 [2024-12-09 22:56:00.707084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.063 BaseBdev1 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.063 [ 00:14:45.063 { 00:14:45.063 "name": "BaseBdev1", 00:14:45.063 "aliases": [ 00:14:45.063 "50418298-1f72-46fe-895a-0f18d7784141" 00:14:45.063 ], 00:14:45.063 "product_name": "Malloc disk", 00:14:45.063 "block_size": 512, 
00:14:45.063 "num_blocks": 65536, 00:14:45.063 "uuid": "50418298-1f72-46fe-895a-0f18d7784141", 00:14:45.063 "assigned_rate_limits": { 00:14:45.063 "rw_ios_per_sec": 0, 00:14:45.063 "rw_mbytes_per_sec": 0, 00:14:45.063 "r_mbytes_per_sec": 0, 00:14:45.063 "w_mbytes_per_sec": 0 00:14:45.063 }, 00:14:45.063 "claimed": true, 00:14:45.063 "claim_type": "exclusive_write", 00:14:45.063 "zoned": false, 00:14:45.063 "supported_io_types": { 00:14:45.063 "read": true, 00:14:45.063 "write": true, 00:14:45.063 "unmap": true, 00:14:45.063 "flush": true, 00:14:45.063 "reset": true, 00:14:45.063 "nvme_admin": false, 00:14:45.063 "nvme_io": false, 00:14:45.063 "nvme_io_md": false, 00:14:45.063 "write_zeroes": true, 00:14:45.063 "zcopy": true, 00:14:45.063 "get_zone_info": false, 00:14:45.063 "zone_management": false, 00:14:45.063 "zone_append": false, 00:14:45.063 "compare": false, 00:14:45.063 "compare_and_write": false, 00:14:45.063 "abort": true, 00:14:45.063 "seek_hole": false, 00:14:45.063 "seek_data": false, 00:14:45.063 "copy": true, 00:14:45.063 "nvme_iov_md": false 00:14:45.063 }, 00:14:45.063 "memory_domains": [ 00:14:45.063 { 00:14:45.063 "dma_device_id": "system", 00:14:45.063 "dma_device_type": 1 00:14:45.063 }, 00:14:45.063 { 00:14:45.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.063 "dma_device_type": 2 00:14:45.063 } 00:14:45.063 ], 00:14:45.063 "driver_specific": {} 00:14:45.063 } 00:14:45.063 ] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.063 22:56:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.064 "name": "Existed_Raid", 00:14:45.064 "uuid": "eb67c3c9-5dd9-4268-bf2c-c1292032ad44", 00:14:45.064 "strip_size_kb": 0, 00:14:45.064 "state": "configuring", 00:14:45.064 "raid_level": "raid1", 00:14:45.064 "superblock": true, 00:14:45.064 "num_base_bdevs": 2, 00:14:45.064 "num_base_bdevs_discovered": 1, 00:14:45.064 "num_base_bdevs_operational": 2, 00:14:45.064 "base_bdevs_list": [ 00:14:45.064 { 00:14:45.064 "name": "BaseBdev1", 
00:14:45.064 "uuid": "50418298-1f72-46fe-895a-0f18d7784141", 00:14:45.064 "is_configured": true, 00:14:45.064 "data_offset": 2048, 00:14:45.064 "data_size": 63488 00:14:45.064 }, 00:14:45.064 { 00:14:45.064 "name": "BaseBdev2", 00:14:45.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.064 "is_configured": false, 00:14:45.064 "data_offset": 0, 00:14:45.064 "data_size": 0 00:14:45.064 } 00:14:45.064 ] 00:14:45.064 }' 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.064 22:56:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.630 [2024-12-09 22:56:01.242236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.630 [2024-12-09 22:56:01.242314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.630 [2024-12-09 22:56:01.254294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.630 [2024-12-09 22:56:01.256567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:14:45.630 [2024-12-09 22:56:01.256616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.630 "name": "Existed_Raid", 00:14:45.630 "uuid": "928bbebf-76e1-4bef-90d8-4b7580b08cea", 00:14:45.630 "strip_size_kb": 0, 00:14:45.630 "state": "configuring", 00:14:45.630 "raid_level": "raid1", 00:14:45.630 "superblock": true, 00:14:45.630 "num_base_bdevs": 2, 00:14:45.630 "num_base_bdevs_discovered": 1, 00:14:45.630 "num_base_bdevs_operational": 2, 00:14:45.630 "base_bdevs_list": [ 00:14:45.630 { 00:14:45.630 "name": "BaseBdev1", 00:14:45.630 "uuid": "50418298-1f72-46fe-895a-0f18d7784141", 00:14:45.630 "is_configured": true, 00:14:45.630 "data_offset": 2048, 00:14:45.630 "data_size": 63488 00:14:45.630 }, 00:14:45.630 { 00:14:45.630 "name": "BaseBdev2", 00:14:45.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.630 "is_configured": false, 00:14:45.630 "data_offset": 0, 00:14:45.630 "data_size": 0 00:14:45.630 } 00:14:45.630 ] 00:14:45.630 }' 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.630 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.889 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.889 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.889 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.148 [2024-12-09 22:56:01.793114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.148 [2024-12-09 22:56:01.793480] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:46.148 [2024-12-09 22:56:01.793500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:46.148 [2024-12-09 22:56:01.793797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:46.148 [2024-12-09 22:56:01.794014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:46.148 [2024-12-09 22:56:01.794050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:46.148 [2024-12-09 22:56:01.794219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.148 BaseBdev2 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.148 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.148 [ 00:14:46.148 { 00:14:46.148 "name": "BaseBdev2", 00:14:46.148 "aliases": [ 00:14:46.148 "3f17eb6e-6d2f-4e1a-b5dc-45a1feca94c8" 00:14:46.148 ], 00:14:46.148 "product_name": "Malloc disk", 00:14:46.148 "block_size": 512, 00:14:46.148 "num_blocks": 65536, 00:14:46.148 "uuid": "3f17eb6e-6d2f-4e1a-b5dc-45a1feca94c8", 00:14:46.148 "assigned_rate_limits": { 00:14:46.148 "rw_ios_per_sec": 0, 00:14:46.148 "rw_mbytes_per_sec": 0, 00:14:46.148 "r_mbytes_per_sec": 0, 00:14:46.148 "w_mbytes_per_sec": 0 00:14:46.148 }, 00:14:46.148 "claimed": true, 00:14:46.148 "claim_type": "exclusive_write", 00:14:46.148 "zoned": false, 00:14:46.148 "supported_io_types": { 00:14:46.148 "read": true, 00:14:46.148 "write": true, 00:14:46.149 "unmap": true, 00:14:46.149 "flush": true, 00:14:46.149 "reset": true, 00:14:46.149 "nvme_admin": false, 00:14:46.149 "nvme_io": false, 00:14:46.149 "nvme_io_md": false, 00:14:46.149 "write_zeroes": true, 00:14:46.149 "zcopy": true, 00:14:46.149 "get_zone_info": false, 00:14:46.149 "zone_management": false, 00:14:46.149 "zone_append": false, 00:14:46.149 "compare": false, 00:14:46.149 "compare_and_write": false, 00:14:46.149 "abort": true, 00:14:46.149 "seek_hole": false, 00:14:46.149 "seek_data": false, 00:14:46.149 "copy": true, 00:14:46.149 "nvme_iov_md": false 00:14:46.149 }, 00:14:46.149 "memory_domains": [ 00:14:46.149 { 00:14:46.149 "dma_device_id": "system", 00:14:46.149 "dma_device_type": 1 00:14:46.149 }, 00:14:46.149 { 00:14:46.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.149 "dma_device_type": 2 00:14:46.149 } 00:14:46.149 ], 00:14:46.149 "driver_specific": 
{} 00:14:46.149 } 00:14:46.149 ] 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.149 "name": "Existed_Raid", 00:14:46.149 "uuid": "928bbebf-76e1-4bef-90d8-4b7580b08cea", 00:14:46.149 "strip_size_kb": 0, 00:14:46.149 "state": "online", 00:14:46.149 "raid_level": "raid1", 00:14:46.149 "superblock": true, 00:14:46.149 "num_base_bdevs": 2, 00:14:46.149 "num_base_bdevs_discovered": 2, 00:14:46.149 "num_base_bdevs_operational": 2, 00:14:46.149 "base_bdevs_list": [ 00:14:46.149 { 00:14:46.149 "name": "BaseBdev1", 00:14:46.149 "uuid": "50418298-1f72-46fe-895a-0f18d7784141", 00:14:46.149 "is_configured": true, 00:14:46.149 "data_offset": 2048, 00:14:46.149 "data_size": 63488 00:14:46.149 }, 00:14:46.149 { 00:14:46.149 "name": "BaseBdev2", 00:14:46.149 "uuid": "3f17eb6e-6d2f-4e1a-b5dc-45a1feca94c8", 00:14:46.149 "is_configured": true, 00:14:46.149 "data_offset": 2048, 00:14:46.149 "data_size": 63488 00:14:46.149 } 00:14:46.149 ] 00:14:46.149 }' 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.149 22:56:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.715 [2024-12-09 22:56:02.292958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.715 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:46.715 "name": "Existed_Raid", 00:14:46.715 "aliases": [ 00:14:46.715 "928bbebf-76e1-4bef-90d8-4b7580b08cea" 00:14:46.715 ], 00:14:46.715 "product_name": "Raid Volume", 00:14:46.715 "block_size": 512, 00:14:46.715 "num_blocks": 63488, 00:14:46.716 "uuid": "928bbebf-76e1-4bef-90d8-4b7580b08cea", 00:14:46.716 "assigned_rate_limits": { 00:14:46.716 "rw_ios_per_sec": 0, 00:14:46.716 "rw_mbytes_per_sec": 0, 00:14:46.716 "r_mbytes_per_sec": 0, 00:14:46.716 "w_mbytes_per_sec": 0 00:14:46.716 }, 00:14:46.716 "claimed": false, 00:14:46.716 "zoned": false, 00:14:46.716 "supported_io_types": { 00:14:46.716 "read": true, 00:14:46.716 "write": true, 00:14:46.716 "unmap": false, 00:14:46.716 "flush": false, 00:14:46.716 "reset": true, 00:14:46.716 "nvme_admin": false, 00:14:46.716 "nvme_io": false, 00:14:46.716 "nvme_io_md": false, 00:14:46.716 "write_zeroes": true, 00:14:46.716 "zcopy": false, 00:14:46.716 "get_zone_info": false, 00:14:46.716 "zone_management": false, 00:14:46.716 "zone_append": false, 00:14:46.716 "compare": false, 00:14:46.716 "compare_and_write": false, 
00:14:46.716 "abort": false, 00:14:46.716 "seek_hole": false, 00:14:46.716 "seek_data": false, 00:14:46.716 "copy": false, 00:14:46.716 "nvme_iov_md": false 00:14:46.716 }, 00:14:46.716 "memory_domains": [ 00:14:46.716 { 00:14:46.716 "dma_device_id": "system", 00:14:46.716 "dma_device_type": 1 00:14:46.716 }, 00:14:46.716 { 00:14:46.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.716 "dma_device_type": 2 00:14:46.716 }, 00:14:46.716 { 00:14:46.716 "dma_device_id": "system", 00:14:46.716 "dma_device_type": 1 00:14:46.716 }, 00:14:46.716 { 00:14:46.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.716 "dma_device_type": 2 00:14:46.716 } 00:14:46.716 ], 00:14:46.716 "driver_specific": { 00:14:46.716 "raid": { 00:14:46.716 "uuid": "928bbebf-76e1-4bef-90d8-4b7580b08cea", 00:14:46.716 "strip_size_kb": 0, 00:14:46.716 "state": "online", 00:14:46.716 "raid_level": "raid1", 00:14:46.716 "superblock": true, 00:14:46.716 "num_base_bdevs": 2, 00:14:46.716 "num_base_bdevs_discovered": 2, 00:14:46.716 "num_base_bdevs_operational": 2, 00:14:46.716 "base_bdevs_list": [ 00:14:46.716 { 00:14:46.716 "name": "BaseBdev1", 00:14:46.716 "uuid": "50418298-1f72-46fe-895a-0f18d7784141", 00:14:46.716 "is_configured": true, 00:14:46.716 "data_offset": 2048, 00:14:46.716 "data_size": 63488 00:14:46.716 }, 00:14:46.716 { 00:14:46.716 "name": "BaseBdev2", 00:14:46.716 "uuid": "3f17eb6e-6d2f-4e1a-b5dc-45a1feca94c8", 00:14:46.716 "is_configured": true, 00:14:46.716 "data_offset": 2048, 00:14:46.716 "data_size": 63488 00:14:46.716 } 00:14:46.716 ] 00:14:46.716 } 00:14:46.716 } 00:14:46.716 }' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:46.716 BaseBdev2' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.716 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.716 [2024-12-09 22:56:02.508353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.974 22:56:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.974 22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.974 "name": "Existed_Raid", 00:14:46.974 "uuid": "928bbebf-76e1-4bef-90d8-4b7580b08cea", 00:14:46.974 "strip_size_kb": 0, 00:14:46.974 "state": "online", 00:14:46.975 "raid_level": "raid1", 00:14:46.975 "superblock": true, 00:14:46.975 "num_base_bdevs": 2, 00:14:46.975 "num_base_bdevs_discovered": 1, 00:14:46.975 "num_base_bdevs_operational": 1, 00:14:46.975 "base_bdevs_list": [ 00:14:46.975 { 00:14:46.975 "name": null, 00:14:46.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.975 "is_configured": false, 00:14:46.975 "data_offset": 0, 00:14:46.975 "data_size": 63488 00:14:46.975 }, 00:14:46.975 { 00:14:46.975 "name": "BaseBdev2", 00:14:46.975 "uuid": "3f17eb6e-6d2f-4e1a-b5dc-45a1feca94c8", 00:14:46.975 "is_configured": true, 00:14:46.975 "data_offset": 2048, 00:14:46.975 "data_size": 63488 00:14:46.975 } 00:14:46.975 ] 00:14:46.975 }' 00:14:46.975 
22:56:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.975 22:56:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.234 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:47.234 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.234 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:47.234 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.234 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.234 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.493 [2024-12-09 22:56:03.141169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.493 [2024-12-09 22:56:03.141328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.493 [2024-12-09 22:56:03.262083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.493 [2024-12-09 22:56:03.262284] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.493 [2024-12-09 22:56:03.262341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63401 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63401 ']' 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63401 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.493 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63401 00:14:47.752 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.752 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.752 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63401' 00:14:47.752 killing process with pid 63401 00:14:47.752 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63401 00:14:47.752 [2024-12-09 22:56:03.363568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.752 22:56:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63401 00:14:47.752 [2024-12-09 22:56:03.383641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.127 22:56:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:49.127 00:14:49.127 real 0m5.625s 00:14:49.127 user 0m7.924s 00:14:49.127 sys 0m1.055s 00:14:49.127 ************************************ 00:14:49.127 END TEST raid_state_function_test_sb 00:14:49.127 ************************************ 00:14:49.127 22:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.127 22:56:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 22:56:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:49.127 22:56:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:49.127 22:56:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.127 22:56:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 
************************************ 00:14:49.127 START TEST raid_superblock_test 00:14:49.127 ************************************ 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:49.127 22:56:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63655 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63655 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63655 ']' 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.127 22:56:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 [2024-12-09 22:56:04.915645] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:14:49.127 [2024-12-09 22:56:04.915945] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63655 ]
00:14:49.385 [2024-12-09 22:56:05.090309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:49.642 [2024-12-09 22:56:05.249497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:49.901 [2024-12-09 22:56:05.510458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:49.901 [2024-12-09 22:56:05.510687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.159 malloc1
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.159 [2024-12-09 22:56:05.842298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:50.159 [2024-12-09 22:56:05.842420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:50.159 [2024-12-09 22:56:05.842493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:50.159 [2024-12-09 22:56:05.842537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:50.159 [2024-12-09 22:56:05.845216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:50.159 [2024-12-09 22:56:05.845291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:50.159 pt1
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.159 malloc2
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.159 [2024-12-09 22:56:05.910131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:50.159 [2024-12-09 22:56:05.910323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:50.159 [2024-12-09 22:56:05.910372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:50.159 [2024-12-09 22:56:05.910387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:50.159 [2024-12-09 22:56:05.913427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:50.159 [2024-12-09 22:56:05.913554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:50.159 pt2
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.159 [2024-12-09 22:56:05.922297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:50.159 [2024-12-09 22:56:05.924840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:50.159 [2024-12-09 22:56:05.925168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:14:50.159 [2024-12-09 22:56:05.925198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:50.159 [2024-12-09 22:56:05.925586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:14:50.159 [2024-12-09 22:56:05.925831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:14:50.159 [2024-12-09 22:56:05.925863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:14:50.159 [2024-12-09 22:56:05.926083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:50.159 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:50.160 "name": "raid_bdev1",
00:14:50.160 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:50.160 "strip_size_kb": 0,
00:14:50.160 "state": "online",
00:14:50.160 "raid_level": "raid1",
00:14:50.160 "superblock": true,
00:14:50.160 "num_base_bdevs": 2,
00:14:50.160 "num_base_bdevs_discovered": 2,
00:14:50.160 "num_base_bdevs_operational": 2,
00:14:50.160 "base_bdevs_list": [
00:14:50.160 {
00:14:50.160 "name": "pt1",
00:14:50.160 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:50.160 "is_configured": true,
00:14:50.160 "data_offset": 2048,
00:14:50.160 "data_size": 63488
00:14:50.160 },
00:14:50.160 {
00:14:50.160 "name": "pt2",
00:14:50.160 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:50.160 "is_configured": true,
00:14:50.160 "data_offset": 2048,
00:14:50.160 "data_size": 63488
00:14:50.160 }
00:14:50.160 ]
00:14:50.160 }'
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:50.160 22:56:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.726 [2024-12-09 22:56:06.441874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.726 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:50.726 "name": "raid_bdev1",
00:14:50.726 "aliases": [
00:14:50.726 "5dc0876c-69ea-4da1-91dd-3370614bcb8f"
00:14:50.726 ],
00:14:50.726 "product_name": "Raid Volume",
00:14:50.726 "block_size": 512,
00:14:50.726 "num_blocks": 63488,
00:14:50.726 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:50.726 "assigned_rate_limits": {
00:14:50.727 "rw_ios_per_sec": 0,
00:14:50.727 "rw_mbytes_per_sec": 0,
00:14:50.727 "r_mbytes_per_sec": 0,
00:14:50.727 "w_mbytes_per_sec": 0
00:14:50.727 },
00:14:50.727 "claimed": false,
00:14:50.727 "zoned": false,
00:14:50.727 "supported_io_types": {
00:14:50.727 "read": true,
00:14:50.727 "write": true,
00:14:50.727 "unmap": false,
00:14:50.727 "flush": false,
00:14:50.727 "reset": true,
00:14:50.727 "nvme_admin": false,
00:14:50.727 "nvme_io": false,
00:14:50.727 "nvme_io_md": false,
00:14:50.727 "write_zeroes": true,
00:14:50.727 "zcopy": false,
00:14:50.727 "get_zone_info": false,
00:14:50.727 "zone_management": false,
00:14:50.727 "zone_append": false,
00:14:50.727 "compare": false,
00:14:50.727 "compare_and_write": false,
00:14:50.727 "abort": false,
00:14:50.727 "seek_hole": false,
00:14:50.727 "seek_data": false,
00:14:50.727 "copy": false,
00:14:50.727 "nvme_iov_md": false
00:14:50.727 },
00:14:50.727 "memory_domains": [
00:14:50.727 {
00:14:50.727 "dma_device_id": "system",
00:14:50.727 "dma_device_type": 1
00:14:50.727 },
00:14:50.727 {
00:14:50.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:50.727 "dma_device_type": 2
00:14:50.727 },
00:14:50.727 {
00:14:50.727 "dma_device_id": "system",
00:14:50.727 "dma_device_type": 1
00:14:50.727 },
00:14:50.727 {
00:14:50.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:50.727 "dma_device_type": 2
00:14:50.727 }
00:14:50.727 ],
00:14:50.727 "driver_specific": {
00:14:50.727 "raid": {
00:14:50.727 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:50.727 "strip_size_kb": 0,
00:14:50.727 "state": "online",
00:14:50.727 "raid_level": "raid1",
00:14:50.727 "superblock": true,
00:14:50.727 "num_base_bdevs": 2,
00:14:50.727 "num_base_bdevs_discovered": 2,
00:14:50.727 "num_base_bdevs_operational": 2,
00:14:50.727 "base_bdevs_list": [
00:14:50.727 {
00:14:50.727 "name": "pt1",
00:14:50.727 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:50.727 "is_configured": true,
00:14:50.727 "data_offset": 2048,
00:14:50.727 "data_size": 63488
00:14:50.727 },
00:14:50.727 {
00:14:50.727 "name": "pt2",
00:14:50.727 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:50.727 "is_configured": true,
00:14:50.727 "data_offset": 2048,
00:14:50.727 "data_size": 63488
00:14:50.727 }
00:14:50.727 ]
00:14:50.727 }
00:14:50.727 }
00:14:50.727 }'
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:50.727 pt2'
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.727 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 [2024-12-09 22:56:06.685413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5dc0876c-69ea-4da1-91dd-3370614bcb8f
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5dc0876c-69ea-4da1-91dd-3370614bcb8f ']'
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 [2024-12-09 22:56:06.728920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:50.985 [2024-12-09 22:56:06.728951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:50.985 [2024-12-09 22:56:06.729063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:50.985 [2024-12-09 22:56:06.729139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:50.985 [2024-12-09 22:56:06.729154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:50.985 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.244 [2024-12-09 22:56:06.872779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:51.244 [2024-12-09 22:56:06.875489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:51.244 [2024-12-09 22:56:06.875629] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:14:51.244 [2024-12-09 22:56:06.875749] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:14:51.244 [2024-12-09 22:56:06.875817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:51.244 [2024-12-09 22:56:06.875854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:14:51.244 request:
00:14:51.244 {
00:14:51.244 "name": "raid_bdev1",
00:14:51.244 "raid_level": "raid1",
00:14:51.244 "base_bdevs": [
00:14:51.244 "malloc1",
00:14:51.244 "malloc2"
00:14:51.244 ],
00:14:51.244 "superblock": false,
00:14:51.244 "method": "bdev_raid_create",
00:14:51.244 "req_id": 1
00:14:51.244 }
00:14:51.244 Got JSON-RPC error response
00:14:51.244 response:
00:14:51.244 {
00:14:51.244 "code": -17,
00:14:51.244 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:51.244 }
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.244 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.244 [2024-12-09 22:56:06.940754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:51.244 [2024-12-09 22:56:06.940989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:51.244 [2024-12-09 22:56:06.941052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:14:51.244 [2024-12-09 22:56:06.941118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:51.244 [2024-12-09 22:56:06.944820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:51.244 [2024-12-09 22:56:06.944955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:51.245 [2024-12-09 22:56:06.945141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:51.245 [2024-12-09 22:56:06.945301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:51.245 pt1
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:51.245 22:56:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.245 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:51.245 "name": "raid_bdev1",
00:14:51.245 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:51.245 "strip_size_kb": 0,
00:14:51.245 "state": "configuring",
00:14:51.245 "raid_level": "raid1",
00:14:51.245 "superblock": true,
00:14:51.245 "num_base_bdevs": 2,
00:14:51.245 "num_base_bdevs_discovered": 1,
00:14:51.245 "num_base_bdevs_operational": 2,
00:14:51.245 "base_bdevs_list": [
00:14:51.245 {
00:14:51.245 "name": "pt1",
00:14:51.245 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:51.245 "is_configured": true,
00:14:51.245 "data_offset": 2048,
00:14:51.245 "data_size": 63488
00:14:51.245 },
00:14:51.245 {
00:14:51.245 "name": null,
00:14:51.245 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:51.245 "is_configured": false,
00:14:51.245 "data_offset": 2048,
00:14:51.245 "data_size": 63488
00:14:51.245 }
00:14:51.245 ]
00:14:51.245 }'
00:14:51.245 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:51.245 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.813 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:14:51.813 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:14:51.813 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.814 [2024-12-09 22:56:07.424692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:51.814 [2024-12-09 22:56:07.424803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:51.814 [2024-12-09 22:56:07.424832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:14:51.814 [2024-12-09 22:56:07.424845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:51.814 [2024-12-09 22:56:07.425373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:51.814 [2024-12-09 22:56:07.425398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:51.814 [2024-12-09 22:56:07.425516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:51.814 [2024-12-09 22:56:07.425551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:51.814 [2024-12-09 22:56:07.425701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:51.814 [2024-12-09 22:56:07.425716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:51.814 [2024-12-09 22:56:07.426009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:51.814 [2024-12-09 22:56:07.426200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:51.814 [2024-12-09 22:56:07.426210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:14:51.814 [2024-12-09 22:56:07.426386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:51.814 pt2
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:51.814 "name": "raid_bdev1",
00:14:51.814 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:51.814 "strip_size_kb": 0,
00:14:51.814 "state": "online",
00:14:51.814 "raid_level": "raid1",
00:14:51.814 "superblock": true,
00:14:51.814 "num_base_bdevs": 2,
00:14:51.814 "num_base_bdevs_discovered": 2,
00:14:51.814 "num_base_bdevs_operational": 2,
00:14:51.814 "base_bdevs_list": [
00:14:51.814 {
00:14:51.814 "name": "pt1",
00:14:51.814 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:51.814 "is_configured": true,
00:14:51.814 "data_offset": 2048,
00:14:51.814 "data_size": 63488
00:14:51.814 },
00:14:51.814 {
00:14:51.814 "name": "pt2",
00:14:51.814 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:51.814 "is_configured": true,
00:14:51.814 "data_offset": 2048,
00:14:51.814 "data_size": 63488
00:14:51.814 }
00:14:51.814 ]
00:14:51.814 }'
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:51.814 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:52.074 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:14:52.074 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:52.074 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:52.074 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:52.074 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:52.074 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:52.334 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:52.334 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:52.334 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.334 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:52.334 [2024-12-09 22:56:07.936935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:52.334 22:56:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.334 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:52.334 "name": "raid_bdev1",
00:14:52.334 "aliases": [
00:14:52.334 "5dc0876c-69ea-4da1-91dd-3370614bcb8f"
00:14:52.334 ],
00:14:52.334 "product_name": "Raid Volume",
00:14:52.334 "block_size": 512,
00:14:52.334 "num_blocks": 63488,
00:14:52.334 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:52.334 "assigned_rate_limits": {
00:14:52.334 "rw_ios_per_sec": 0,
00:14:52.334 "rw_mbytes_per_sec": 0,
00:14:52.334 "r_mbytes_per_sec": 0,
00:14:52.334 "w_mbytes_per_sec": 0
00:14:52.334 },
00:14:52.334 "claimed": false,
00:14:52.334 "zoned": false,
00:14:52.334 "supported_io_types": {
00:14:52.334 "read": true,
00:14:52.334 "write": true,
00:14:52.334 "unmap": false,
00:14:52.334 "flush": false,
00:14:52.334 "reset": true,
00:14:52.334 "nvme_admin": false,
00:14:52.334 "nvme_io": false,
00:14:52.334 "nvme_io_md": false,
00:14:52.334 "write_zeroes": true,
00:14:52.334 "zcopy": false,
00:14:52.334 "get_zone_info": false,
00:14:52.334 "zone_management": false,
00:14:52.334 "zone_append": false,
00:14:52.334 "compare": false,
00:14:52.334 "compare_and_write": false,
00:14:52.334 "abort": false,
00:14:52.334 "seek_hole": false,
00:14:52.334 "seek_data": false,
00:14:52.334 "copy": false,
00:14:52.334 "nvme_iov_md": false
00:14:52.334 },
00:14:52.334 "memory_domains": [
00:14:52.334 {
00:14:52.334 "dma_device_id": "system",
00:14:52.334 "dma_device_type": 1
00:14:52.334 },
00:14:52.334 {
00:14:52.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:52.334 "dma_device_type": 2
00:14:52.334 },
00:14:52.334 {
00:14:52.334 "dma_device_id": "system",
00:14:52.334 "dma_device_type": 1
00:14:52.334 },
00:14:52.334 {
00:14:52.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:52.334 "dma_device_type": 2
00:14:52.334 }
00:14:52.334 ],
00:14:52.334 "driver_specific": {
00:14:52.334 "raid": {
00:14:52.334 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f",
00:14:52.334 "strip_size_kb": 0,
00:14:52.334 "state": "online",
00:14:52.334 "raid_level": "raid1",
00:14:52.334 "superblock": true,
00:14:52.334 "num_base_bdevs": 2,
00:14:52.334 "num_base_bdevs_discovered": 2,
00:14:52.335 "num_base_bdevs_operational": 2,
00:14:52.335 "base_bdevs_list": [
00:14:52.335 {
00:14:52.335 "name": "pt1",
00:14:52.335 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:52.335 "is_configured": true,
00:14:52.335 "data_offset": 2048,
00:14:52.335 "data_size": 63488
00:14:52.335 },
00:14:52.335 {
00:14:52.335 "name": "pt2",
00:14:52.335 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:52.335 "is_configured": true,
00:14:52.335 "data_offset": 2048,
00:14:52.335 "data_size": 63488
00:14:52.335 }
00:14:52.335 ]
00:14:52.335 }
00:14:52.335 }
00:14:52.335 }'
00:14:52.335 22:56:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:52.335 pt2'
00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.335 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:52.335 [2024-12-09 22:56:08.188973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.594 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5dc0876c-69ea-4da1-91dd-3370614bcb8f '!=' 5dc0876c-69ea-4da1-91dd-3370614bcb8f ']' 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.595 [2024-12-09 22:56:08.240760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.595 "name": "raid_bdev1", 00:14:52.595 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f", 00:14:52.595 "strip_size_kb": 0, 00:14:52.595 "state": "online", 00:14:52.595 "raid_level": "raid1", 00:14:52.595 "superblock": true, 00:14:52.595 "num_base_bdevs": 2, 00:14:52.595 "num_base_bdevs_discovered": 1, 00:14:52.595 "num_base_bdevs_operational": 1, 00:14:52.595 "base_bdevs_list": [ 00:14:52.595 { 00:14:52.595 "name": null, 00:14:52.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.595 "is_configured": false, 00:14:52.595 "data_offset": 0, 00:14:52.595 "data_size": 63488 00:14:52.595 }, 00:14:52.595 { 00:14:52.595 "name": "pt2", 00:14:52.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.595 "is_configured": true, 00:14:52.595 "data_offset": 2048, 00:14:52.595 "data_size": 63488 00:14:52.595 } 00:14:52.595 ] 00:14:52.595 }' 00:14:52.595 22:56:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.595 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 [2024-12-09 22:56:08.732355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.168 [2024-12-09 22:56:08.732496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.168 [2024-12-09 22:56:08.732674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.168 [2024-12-09 22:56:08.732784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.168 [2024-12-09 22:56:08.732844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:53.168 
22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:53.168 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.169 [2024-12-09 22:56:08.792330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.169 [2024-12-09 22:56:08.792594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.169 [2024-12-09 22:56:08.792682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:53.169 [2024-12-09 22:56:08.792755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.169 [2024-12-09 
22:56:08.796451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.169 [2024-12-09 22:56:08.796574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.169 [2024-12-09 22:56:08.796761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:53.169 [2024-12-09 22:56:08.796863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.169 [2024-12-09 22:56:08.797085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:53.169 [2024-12-09 22:56:08.797141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:53.169 pt2 00:14:53.169 [2024-12-09 22:56:08.797519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:53.169 [2024-12-09 22:56:08.797780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:53.169 [2024-12-09 22:56:08.797831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:53.169 [2024-12-09 22:56:08.798082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.169 "name": "raid_bdev1", 00:14:53.169 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f", 00:14:53.169 "strip_size_kb": 0, 00:14:53.169 "state": "online", 00:14:53.169 "raid_level": "raid1", 00:14:53.169 "superblock": true, 00:14:53.169 "num_base_bdevs": 2, 00:14:53.169 "num_base_bdevs_discovered": 1, 00:14:53.169 "num_base_bdevs_operational": 1, 00:14:53.169 "base_bdevs_list": [ 00:14:53.169 { 00:14:53.169 "name": null, 00:14:53.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.169 "is_configured": false, 00:14:53.169 "data_offset": 2048, 00:14:53.169 "data_size": 63488 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "name": "pt2", 00:14:53.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.169 "is_configured": true, 00:14:53.169 "data_offset": 2048, 00:14:53.169 "data_size": 63488 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 }' 
00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.169 22:56:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.434 [2024-12-09 22:56:09.256094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.434 [2024-12-09 22:56:09.256230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.434 [2024-12-09 22:56:09.256371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.434 [2024-12-09 22:56:09.256456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.434 [2024-12-09 22:56:09.256488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.434 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.701 [2024-12-09 22:56:09.316042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.701 [2024-12-09 22:56:09.316139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.701 [2024-12-09 22:56:09.316178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:53.701 [2024-12-09 22:56:09.316193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.701 [2024-12-09 22:56:09.319352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.701 [2024-12-09 22:56:09.319399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.701 [2024-12-09 22:56:09.319536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:53.701 [2024-12-09 22:56:09.319595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.701 [2024-12-09 22:56:09.319808] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:53.701 [2024-12-09 22:56:09.319829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.701 [2024-12-09 22:56:09.319850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:53.701 [2024-12-09 22:56:09.319928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:14:53.701 [2024-12-09 22:56:09.320024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:53.701 [2024-12-09 22:56:09.320035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:53.701 [2024-12-09 22:56:09.320362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:53.701 [2024-12-09 22:56:09.320585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:53.701 [2024-12-09 22:56:09.320604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:53.701 [2024-12-09 22:56:09.320859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.701 pt1 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.701 "name": "raid_bdev1", 00:14:53.701 "uuid": "5dc0876c-69ea-4da1-91dd-3370614bcb8f", 00:14:53.701 "strip_size_kb": 0, 00:14:53.701 "state": "online", 00:14:53.701 "raid_level": "raid1", 00:14:53.701 "superblock": true, 00:14:53.701 "num_base_bdevs": 2, 00:14:53.701 "num_base_bdevs_discovered": 1, 00:14:53.701 "num_base_bdevs_operational": 1, 00:14:53.701 "base_bdevs_list": [ 00:14:53.701 { 00:14:53.701 "name": null, 00:14:53.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.701 "is_configured": false, 00:14:53.701 "data_offset": 2048, 00:14:53.701 "data_size": 63488 00:14:53.701 }, 00:14:53.701 { 00:14:53.701 "name": "pt2", 00:14:53.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.701 "is_configured": true, 00:14:53.701 "data_offset": 2048, 00:14:53.701 "data_size": 63488 00:14:53.701 } 00:14:53.701 ] 00:14:53.701 }' 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.701 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.960 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:53.960 22:56:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:53.960 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.960 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.960 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.219 [2024-12-09 22:56:09.839632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5dc0876c-69ea-4da1-91dd-3370614bcb8f '!=' 5dc0876c-69ea-4da1-91dd-3370614bcb8f ']' 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63655 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63655 ']' 00:14:54.219 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63655 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63655 00:14:54.220 killing 
process with pid 63655 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63655' 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63655 00:14:54.220 [2024-12-09 22:56:09.919683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.220 [2024-12-09 22:56:09.919818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.220 22:56:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63655 00:14:54.220 [2024-12-09 22:56:09.919879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.220 [2024-12-09 22:56:09.919896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:54.479 [2024-12-09 22:56:10.178036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.859 ************************************ 00:14:55.859 END TEST raid_superblock_test 00:14:55.859 ************************************ 00:14:55.859 22:56:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:55.859 00:14:55.859 real 0m6.750s 00:14:55.859 user 0m9.935s 00:14:55.859 sys 0m1.326s 00:14:55.859 22:56:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.859 22:56:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.859 22:56:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:14:55.859 22:56:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:55.859 22:56:11 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.859 22:56:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.859 ************************************ 00:14:55.859 START TEST raid_read_error_test 00:14:55.859 ************************************ 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:55.859 22:56:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fDfYkjcqVr 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63994 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63994 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63994 ']' 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.859 22:56:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.118 [2024-12-09 22:56:11.751047] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:14:56.118 [2024-12-09 22:56:11.751309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63994 ] 00:14:56.118 [2024-12-09 22:56:11.935657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.377 [2024-12-09 22:56:12.087669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.637 [2024-12-09 22:56:12.353278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.637 [2024-12-09 22:56:12.353491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.896 BaseBdev1_malloc 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.896 true 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.896 [2024-12-09 22:56:12.719316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:56.896 [2024-12-09 22:56:12.719440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.896 [2024-12-09 22:56:12.719482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:56.896 [2024-12-09 22:56:12.719497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.896 [2024-12-09 22:56:12.722311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.896 [2024-12-09 22:56:12.722357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.896 BaseBdev1 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.896 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:57.156 BaseBdev2_malloc 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.156 true 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.156 [2024-12-09 22:56:12.797839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:57.156 [2024-12-09 22:56:12.797969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.156 [2024-12-09 22:56:12.797996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:57.156 [2024-12-09 22:56:12.798010] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.156 [2024-12-09 22:56:12.800930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.156 [2024-12-09 22:56:12.800978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.156 BaseBdev2 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:57.156 22:56:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.156 [2024-12-09 22:56:12.809999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.156 [2024-12-09 22:56:12.812468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.156 [2024-12-09 22:56:12.812710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.156 [2024-12-09 22:56:12.812728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:57.156 [2024-12-09 22:56:12.813052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:57.156 [2024-12-09 22:56:12.813294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.156 [2024-12-09 22:56:12.813307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:57.156 [2024-12-09 22:56:12.813521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.156 "name": "raid_bdev1", 00:14:57.156 "uuid": "5d7cd88b-369f-4b53-b173-e675e95d5b76", 00:14:57.156 "strip_size_kb": 0, 00:14:57.156 "state": "online", 00:14:57.156 "raid_level": "raid1", 00:14:57.156 "superblock": true, 00:14:57.156 "num_base_bdevs": 2, 00:14:57.156 "num_base_bdevs_discovered": 2, 00:14:57.156 "num_base_bdevs_operational": 2, 00:14:57.156 "base_bdevs_list": [ 00:14:57.156 { 00:14:57.156 "name": "BaseBdev1", 00:14:57.156 "uuid": "02dfb014-7376-5951-858a-3e78bfba1b3e", 00:14:57.156 "is_configured": true, 00:14:57.156 "data_offset": 2048, 00:14:57.156 "data_size": 63488 00:14:57.156 }, 00:14:57.156 { 00:14:57.156 "name": "BaseBdev2", 00:14:57.156 "uuid": "717cf901-2e53-5407-81c7-ec027c61c968", 00:14:57.156 "is_configured": true, 00:14:57.156 "data_offset": 2048, 00:14:57.156 "data_size": 63488 00:14:57.156 } 00:14:57.156 ] 00:14:57.156 }' 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.156 22:56:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.415 22:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:57.415 22:56:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:57.674 [2024-12-09 22:56:13.362739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.613 22:56:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.613 "name": "raid_bdev1", 00:14:58.613 "uuid": "5d7cd88b-369f-4b53-b173-e675e95d5b76", 00:14:58.613 "strip_size_kb": 0, 00:14:58.613 "state": "online", 00:14:58.613 "raid_level": "raid1", 00:14:58.613 "superblock": true, 00:14:58.613 "num_base_bdevs": 2, 00:14:58.613 "num_base_bdevs_discovered": 2, 00:14:58.613 "num_base_bdevs_operational": 2, 00:14:58.613 "base_bdevs_list": [ 00:14:58.613 { 00:14:58.613 "name": "BaseBdev1", 00:14:58.613 "uuid": "02dfb014-7376-5951-858a-3e78bfba1b3e", 00:14:58.613 "is_configured": true, 00:14:58.613 "data_offset": 2048, 00:14:58.613 "data_size": 63488 00:14:58.613 }, 00:14:58.613 { 00:14:58.613 "name": "BaseBdev2", 00:14:58.613 "uuid": "717cf901-2e53-5407-81c7-ec027c61c968", 00:14:58.613 "is_configured": true, 00:14:58.613 "data_offset": 2048, 00:14:58.613 "data_size": 63488 
00:14:58.613 } 00:14:58.613 ] 00:14:58.613 }' 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.613 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.183 [2024-12-09 22:56:14.745098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.183 [2024-12-09 22:56:14.745242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.183 [2024-12-09 22:56:14.748207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.183 [2024-12-09 22:56:14.748262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.183 [2024-12-09 22:56:14.748359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.183 [2024-12-09 22:56:14.748373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:59.183 { 00:14:59.183 "results": [ 00:14:59.183 { 00:14:59.183 "job": "raid_bdev1", 00:14:59.183 "core_mask": "0x1", 00:14:59.183 "workload": "randrw", 00:14:59.183 "percentage": 50, 00:14:59.183 "status": "finished", 00:14:59.183 "queue_depth": 1, 00:14:59.183 "io_size": 131072, 00:14:59.183 "runtime": 1.382628, 00:14:59.183 "iops": 11805.77856082764, 00:14:59.183 "mibps": 1475.722320103455, 00:14:59.183 "io_failed": 0, 00:14:59.183 "io_timeout": 0, 00:14:59.183 "avg_latency_us": 81.66722755979386, 00:14:59.183 "min_latency_us": 25.823580786026202, 00:14:59.183 "max_latency_us": 1559.6995633187773 00:14:59.183 } 00:14:59.183 ], 
00:14:59.183 "core_count": 1 00:14:59.183 } 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63994 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63994 ']' 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63994 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63994 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63994' 00:14:59.183 killing process with pid 63994 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63994 00:14:59.183 [2024-12-09 22:56:14.798521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.183 22:56:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63994 00:14:59.183 [2024-12-09 22:56:14.967195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fDfYkjcqVr 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:01.090 00:15:01.090 real 0m4.802s 00:15:01.090 user 0m5.601s 00:15:01.090 sys 0m0.731s 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.090 22:56:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.090 ************************************ 00:15:01.090 END TEST raid_read_error_test 00:15:01.090 ************************************ 00:15:01.090 22:56:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:15:01.090 22:56:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:01.090 22:56:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.090 22:56:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.090 ************************************ 00:15:01.090 START TEST raid_write_error_test 00:15:01.090 ************************************ 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d0m2Z5aO56 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64140 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64140 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64140 ']' 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.090 22:56:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.090 [2024-12-09 22:56:16.623120] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:15:01.090 [2024-12-09 22:56:16.623264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64140 ] 00:15:01.090 [2024-12-09 22:56:16.806880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.349 [2024-12-09 22:56:16.968325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.609 [2024-12-09 22:56:17.231220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.609 [2024-12-09 22:56:17.231326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 BaseBdev1_malloc 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 true 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 [2024-12-09 22:56:17.576750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:01.868 [2024-12-09 22:56:17.576827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.868 [2024-12-09 22:56:17.576853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:01.868 [2024-12-09 22:56:17.576867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.868 [2024-12-09 22:56:17.579665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.868 [2024-12-09 22:56:17.579722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.868 BaseBdev1 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 BaseBdev2_malloc 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:01.868 22:56:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 true 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 [2024-12-09 22:56:17.653377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:01.868 [2024-12-09 22:56:17.653483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.868 [2024-12-09 22:56:17.653509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:01.868 [2024-12-09 22:56:17.653523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.868 [2024-12-09 22:56:17.656554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.868 [2024-12-09 22:56:17.656651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.868 BaseBdev2 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.868 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 [2024-12-09 22:56:17.665453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:15:01.868 [2024-12-09 22:56:17.668020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.868 [2024-12-09 22:56:17.668342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:01.869 [2024-12-09 22:56:17.668403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:01.869 [2024-12-09 22:56:17.668758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:01.869 [2024-12-09 22:56:17.669025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:01.869 [2024-12-09 22:56:17.669076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:01.869 [2024-12-09 22:56:17.669376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.128 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.128 "name": "raid_bdev1", 00:15:02.128 "uuid": "8aabb14c-20e0-45b4-9340-d3422d3c4b2f", 00:15:02.128 "strip_size_kb": 0, 00:15:02.128 "state": "online", 00:15:02.128 "raid_level": "raid1", 00:15:02.128 "superblock": true, 00:15:02.128 "num_base_bdevs": 2, 00:15:02.128 "num_base_bdevs_discovered": 2, 00:15:02.128 "num_base_bdevs_operational": 2, 00:15:02.128 "base_bdevs_list": [ 00:15:02.128 { 00:15:02.128 "name": "BaseBdev1", 00:15:02.128 "uuid": "ffb5eed4-603b-53d0-acf4-4f41b28f24f2", 00:15:02.128 "is_configured": true, 00:15:02.128 "data_offset": 2048, 00:15:02.128 "data_size": 63488 00:15:02.128 }, 00:15:02.128 { 00:15:02.128 "name": "BaseBdev2", 00:15:02.128 "uuid": "18745040-6054-5ab1-9949-89ae8d9f5985", 00:15:02.128 "is_configured": true, 00:15:02.128 "data_offset": 2048, 00:15:02.128 "data_size": 63488 00:15:02.128 } 00:15:02.128 ] 00:15:02.128 }' 00:15:02.128 22:56:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.128 22:56:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.388 22:56:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:02.388 22:56:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:02.388 [2024-12-09 22:56:18.234159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 [2024-12-09 22:56:19.146013] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:03.327 [2024-12-09 22:56:19.146097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.327 [2024-12-09 22:56:19.146325] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.327 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.586 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.586 "name": "raid_bdev1", 00:15:03.586 "uuid": "8aabb14c-20e0-45b4-9340-d3422d3c4b2f", 00:15:03.586 "strip_size_kb": 0, 00:15:03.586 "state": "online", 00:15:03.586 "raid_level": "raid1", 00:15:03.586 "superblock": true, 00:15:03.587 "num_base_bdevs": 2, 00:15:03.587 "num_base_bdevs_discovered": 1, 00:15:03.587 "num_base_bdevs_operational": 1, 00:15:03.587 "base_bdevs_list": [ 00:15:03.587 { 00:15:03.587 "name": null, 00:15:03.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.587 "is_configured": false, 00:15:03.587 "data_offset": 0, 00:15:03.587 "data_size": 63488 00:15:03.587 }, 00:15:03.587 { 00:15:03.587 "name": 
"BaseBdev2", 00:15:03.587 "uuid": "18745040-6054-5ab1-9949-89ae8d9f5985", 00:15:03.587 "is_configured": true, 00:15:03.587 "data_offset": 2048, 00:15:03.587 "data_size": 63488 00:15:03.587 } 00:15:03.587 ] 00:15:03.587 }' 00:15:03.587 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.587 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.846 [2024-12-09 22:56:19.599565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.846 [2024-12-09 22:56:19.599700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.846 [2024-12-09 22:56:19.602850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.846 [2024-12-09 22:56:19.602945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.846 [2024-12-09 22:56:19.603030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.846 [2024-12-09 22:56:19.603080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:03.846 { 00:15:03.846 "results": [ 00:15:03.846 { 00:15:03.846 "job": "raid_bdev1", 00:15:03.846 "core_mask": "0x1", 00:15:03.846 "workload": "randrw", 00:15:03.846 "percentage": 50, 00:15:03.846 "status": "finished", 00:15:03.846 "queue_depth": 1, 00:15:03.846 "io_size": 131072, 00:15:03.846 "runtime": 1.365754, 00:15:03.846 "iops": 14532.631791669657, 00:15:03.846 "mibps": 1816.5789739587071, 00:15:03.846 "io_failed": 0, 00:15:03.846 "io_timeout": 0, 
00:15:03.846 "avg_latency_us": 65.7843178461988, 00:15:03.846 "min_latency_us": 24.258515283842794, 00:15:03.846 "max_latency_us": 1459.5353711790392 00:15:03.846 } 00:15:03.846 ], 00:15:03.846 "core_count": 1 00:15:03.846 } 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64140 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64140 ']' 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64140 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64140 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.846 killing process with pid 64140 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64140' 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64140 00:15:03.846 [2024-12-09 22:56:19.651583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.846 22:56:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64140 00:15:04.106 [2024-12-09 22:56:19.809813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d0m2Z5aO56 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:05.482 00:15:05.482 real 0m4.723s 00:15:05.482 user 0m5.525s 00:15:05.482 sys 0m0.692s 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.482 ************************************ 00:15:05.482 END TEST raid_write_error_test 00:15:05.482 ************************************ 00:15:05.482 22:56:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.482 22:56:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:15:05.482 22:56:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:05.482 22:56:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:05.483 22:56:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:05.483 22:56:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.483 22:56:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.483 ************************************ 00:15:05.483 START TEST raid_state_function_test 00:15:05.483 ************************************ 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:05.483 
22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64283 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64283' 00:15:05.483 Process raid pid: 64283 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64283 00:15:05.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64283 ']' 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.483 22:56:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.742 [2024-12-09 22:56:21.406407] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:15:05.742 [2024-12-09 22:56:21.406573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.742 [2024-12-09 22:56:21.590589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.001 [2024-12-09 22:56:21.742182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.260 [2024-12-09 22:56:21.999344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.260 [2024-12-09 22:56:21.999411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 [2024-12-09 22:56:22.330800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.543 [2024-12-09 22:56:22.330879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.543 [2024-12-09 22:56:22.330891] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.543 [2024-12-09 22:56:22.330902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.543 [2024-12-09 22:56:22.330909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.543 [2024-12-09 22:56:22.330919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.543 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:06.544 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.544 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.544 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.544 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.544 "name": "Existed_Raid", 00:15:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.544 "strip_size_kb": 64, 00:15:06.544 "state": "configuring", 00:15:06.544 "raid_level": "raid0", 00:15:06.544 "superblock": false, 00:15:06.544 "num_base_bdevs": 3, 00:15:06.544 "num_base_bdevs_discovered": 0, 00:15:06.544 "num_base_bdevs_operational": 3, 00:15:06.544 "base_bdevs_list": [ 00:15:06.544 { 00:15:06.544 "name": "BaseBdev1", 00:15:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.544 "is_configured": false, 00:15:06.544 "data_offset": 0, 00:15:06.544 "data_size": 0 00:15:06.544 }, 00:15:06.544 { 00:15:06.544 "name": "BaseBdev2", 00:15:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.544 "is_configured": false, 00:15:06.544 "data_offset": 0, 00:15:06.544 "data_size": 0 00:15:06.544 }, 00:15:06.544 { 00:15:06.544 "name": "BaseBdev3", 00:15:06.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.544 "is_configured": false, 00:15:06.544 "data_offset": 0, 00:15:06.544 "data_size": 0 00:15:06.544 } 00:15:06.544 ] 00:15:06.544 }' 00:15:06.544 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.544 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.130 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.130 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.130 22:56:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.130 [2024-12-09 22:56:22.777987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.130 [2024-12-09 22:56:22.778123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:07.130 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.130 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:07.130 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.131 [2024-12-09 22:56:22.789958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.131 [2024-12-09 22:56:22.790086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.131 [2024-12-09 22:56:22.790123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.131 [2024-12-09 22:56:22.790152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.131 [2024-12-09 22:56:22.790208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.131 [2024-12-09 22:56:22.790236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.131 [2024-12-09 22:56:22.847254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.131 BaseBdev1 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.131 [ 00:15:07.131 { 00:15:07.131 "name": "BaseBdev1", 00:15:07.131 "aliases": [ 00:15:07.131 "973b6412-41f1-423f-92fb-1542942a78c3" 00:15:07.131 ], 00:15:07.131 
"product_name": "Malloc disk", 00:15:07.131 "block_size": 512, 00:15:07.131 "num_blocks": 65536, 00:15:07.131 "uuid": "973b6412-41f1-423f-92fb-1542942a78c3", 00:15:07.131 "assigned_rate_limits": { 00:15:07.131 "rw_ios_per_sec": 0, 00:15:07.131 "rw_mbytes_per_sec": 0, 00:15:07.131 "r_mbytes_per_sec": 0, 00:15:07.131 "w_mbytes_per_sec": 0 00:15:07.131 }, 00:15:07.131 "claimed": true, 00:15:07.131 "claim_type": "exclusive_write", 00:15:07.131 "zoned": false, 00:15:07.131 "supported_io_types": { 00:15:07.131 "read": true, 00:15:07.131 "write": true, 00:15:07.131 "unmap": true, 00:15:07.131 "flush": true, 00:15:07.131 "reset": true, 00:15:07.131 "nvme_admin": false, 00:15:07.131 "nvme_io": false, 00:15:07.131 "nvme_io_md": false, 00:15:07.131 "write_zeroes": true, 00:15:07.131 "zcopy": true, 00:15:07.131 "get_zone_info": false, 00:15:07.131 "zone_management": false, 00:15:07.131 "zone_append": false, 00:15:07.131 "compare": false, 00:15:07.131 "compare_and_write": false, 00:15:07.131 "abort": true, 00:15:07.131 "seek_hole": false, 00:15:07.131 "seek_data": false, 00:15:07.131 "copy": true, 00:15:07.131 "nvme_iov_md": false 00:15:07.131 }, 00:15:07.131 "memory_domains": [ 00:15:07.131 { 00:15:07.131 "dma_device_id": "system", 00:15:07.131 "dma_device_type": 1 00:15:07.131 }, 00:15:07.131 { 00:15:07.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.131 "dma_device_type": 2 00:15:07.131 } 00:15:07.131 ], 00:15:07.131 "driver_specific": {} 00:15:07.131 } 00:15:07.131 ] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.131 22:56:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.131 "name": "Existed_Raid", 00:15:07.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.131 "strip_size_kb": 64, 00:15:07.131 "state": "configuring", 00:15:07.131 "raid_level": "raid0", 00:15:07.131 "superblock": false, 00:15:07.131 "num_base_bdevs": 3, 00:15:07.131 "num_base_bdevs_discovered": 1, 00:15:07.131 "num_base_bdevs_operational": 3, 00:15:07.131 "base_bdevs_list": [ 00:15:07.131 { 00:15:07.131 "name": "BaseBdev1", 
00:15:07.131 "uuid": "973b6412-41f1-423f-92fb-1542942a78c3", 00:15:07.131 "is_configured": true, 00:15:07.131 "data_offset": 0, 00:15:07.131 "data_size": 65536 00:15:07.131 }, 00:15:07.131 { 00:15:07.131 "name": "BaseBdev2", 00:15:07.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.131 "is_configured": false, 00:15:07.131 "data_offset": 0, 00:15:07.131 "data_size": 0 00:15:07.131 }, 00:15:07.131 { 00:15:07.131 "name": "BaseBdev3", 00:15:07.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.131 "is_configured": false, 00:15:07.131 "data_offset": 0, 00:15:07.131 "data_size": 0 00:15:07.131 } 00:15:07.131 ] 00:15:07.131 }' 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.131 22:56:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.701 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.701 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.701 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.701 [2024-12-09 22:56:23.378526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.701 [2024-12-09 22:56:23.378704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:07.701 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.702 [2024-12-09 
22:56:23.390559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.702 [2024-12-09 22:56:23.393011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.702 [2024-12-09 22:56:23.393107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.702 [2024-12-09 22:56:23.393144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.702 [2024-12-09 22:56:23.393171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.702 "name": "Existed_Raid", 00:15:07.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.702 "strip_size_kb": 64, 00:15:07.702 "state": "configuring", 00:15:07.702 "raid_level": "raid0", 00:15:07.702 "superblock": false, 00:15:07.702 "num_base_bdevs": 3, 00:15:07.702 "num_base_bdevs_discovered": 1, 00:15:07.702 "num_base_bdevs_operational": 3, 00:15:07.702 "base_bdevs_list": [ 00:15:07.702 { 00:15:07.702 "name": "BaseBdev1", 00:15:07.702 "uuid": "973b6412-41f1-423f-92fb-1542942a78c3", 00:15:07.702 "is_configured": true, 00:15:07.702 "data_offset": 0, 00:15:07.702 "data_size": 65536 00:15:07.702 }, 00:15:07.702 { 00:15:07.702 "name": "BaseBdev2", 00:15:07.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.702 "is_configured": false, 00:15:07.702 "data_offset": 0, 00:15:07.702 "data_size": 0 00:15:07.702 }, 00:15:07.702 { 00:15:07.702 "name": "BaseBdev3", 00:15:07.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.702 "is_configured": false, 00:15:07.702 "data_offset": 0, 00:15:07.702 "data_size": 0 00:15:07.702 } 00:15:07.702 ] 00:15:07.702 }' 00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:07.702 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.270 [2024-12-09 22:56:23.961197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.270 BaseBdev2 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:08.270 22:56:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.270 [ 00:15:08.270 { 00:15:08.270 "name": "BaseBdev2", 00:15:08.270 "aliases": [ 00:15:08.270 "3900364d-b78d-44c3-8dcc-ad98b360bfe9" 00:15:08.270 ], 00:15:08.270 "product_name": "Malloc disk", 00:15:08.270 "block_size": 512, 00:15:08.270 "num_blocks": 65536, 00:15:08.270 "uuid": "3900364d-b78d-44c3-8dcc-ad98b360bfe9", 00:15:08.270 "assigned_rate_limits": { 00:15:08.270 "rw_ios_per_sec": 0, 00:15:08.270 "rw_mbytes_per_sec": 0, 00:15:08.270 "r_mbytes_per_sec": 0, 00:15:08.270 "w_mbytes_per_sec": 0 00:15:08.270 }, 00:15:08.270 "claimed": true, 00:15:08.270 "claim_type": "exclusive_write", 00:15:08.270 "zoned": false, 00:15:08.270 "supported_io_types": { 00:15:08.270 "read": true, 00:15:08.270 "write": true, 00:15:08.270 "unmap": true, 00:15:08.270 "flush": true, 00:15:08.270 "reset": true, 00:15:08.270 "nvme_admin": false, 00:15:08.270 "nvme_io": false, 00:15:08.270 "nvme_io_md": false, 00:15:08.270 "write_zeroes": true, 00:15:08.270 "zcopy": true, 00:15:08.270 "get_zone_info": false, 00:15:08.270 "zone_management": false, 00:15:08.270 "zone_append": false, 00:15:08.270 "compare": false, 00:15:08.270 "compare_and_write": false, 00:15:08.270 "abort": true, 00:15:08.270 "seek_hole": false, 00:15:08.270 "seek_data": false, 00:15:08.270 "copy": true, 00:15:08.270 "nvme_iov_md": false 00:15:08.270 }, 00:15:08.270 "memory_domains": [ 00:15:08.270 { 00:15:08.270 "dma_device_id": "system", 00:15:08.270 "dma_device_type": 1 00:15:08.270 }, 00:15:08.270 { 00:15:08.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.270 "dma_device_type": 2 00:15:08.270 } 00:15:08.270 ], 00:15:08.270 "driver_specific": {} 00:15:08.270 } 00:15:08.270 ] 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.270 22:56:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.270 22:56:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:08.270 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.270 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.271 "name": "Existed_Raid", 00:15:08.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.271 "strip_size_kb": 64, 00:15:08.271 "state": "configuring", 00:15:08.271 "raid_level": "raid0", 00:15:08.271 "superblock": false, 00:15:08.271 "num_base_bdevs": 3, 00:15:08.271 "num_base_bdevs_discovered": 2, 00:15:08.271 "num_base_bdevs_operational": 3, 00:15:08.271 "base_bdevs_list": [ 00:15:08.271 { 00:15:08.271 "name": "BaseBdev1", 00:15:08.271 "uuid": "973b6412-41f1-423f-92fb-1542942a78c3", 00:15:08.271 "is_configured": true, 00:15:08.271 "data_offset": 0, 00:15:08.271 "data_size": 65536 00:15:08.271 }, 00:15:08.271 { 00:15:08.271 "name": "BaseBdev2", 00:15:08.271 "uuid": "3900364d-b78d-44c3-8dcc-ad98b360bfe9", 00:15:08.271 "is_configured": true, 00:15:08.271 "data_offset": 0, 00:15:08.271 "data_size": 65536 00:15:08.271 }, 00:15:08.271 { 00:15:08.271 "name": "BaseBdev3", 00:15:08.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.271 "is_configured": false, 00:15:08.271 "data_offset": 0, 00:15:08.271 "data_size": 0 00:15:08.271 } 00:15:08.271 ] 00:15:08.271 }' 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.271 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.839 [2024-12-09 22:56:24.549414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.839 [2024-12-09 22:56:24.549502] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:08.839 [2024-12-09 22:56:24.549521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:08.839 [2024-12-09 22:56:24.549869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.839 [2024-12-09 22:56:24.550080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:08.839 [2024-12-09 22:56:24.550093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:08.839 [2024-12-09 22:56:24.550588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.839 BaseBdev3 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.839 
22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.839 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.840 [ 00:15:08.840 { 00:15:08.840 "name": "BaseBdev3", 00:15:08.840 "aliases": [ 00:15:08.840 "c42f0b56-889d-4fec-b7a5-0342df4416da" 00:15:08.840 ], 00:15:08.840 "product_name": "Malloc disk", 00:15:08.840 "block_size": 512, 00:15:08.840 "num_blocks": 65536, 00:15:08.840 "uuid": "c42f0b56-889d-4fec-b7a5-0342df4416da", 00:15:08.840 "assigned_rate_limits": { 00:15:08.840 "rw_ios_per_sec": 0, 00:15:08.840 "rw_mbytes_per_sec": 0, 00:15:08.840 "r_mbytes_per_sec": 0, 00:15:08.840 "w_mbytes_per_sec": 0 00:15:08.840 }, 00:15:08.840 "claimed": true, 00:15:08.840 "claim_type": "exclusive_write", 00:15:08.840 "zoned": false, 00:15:08.840 "supported_io_types": { 00:15:08.840 "read": true, 00:15:08.840 "write": true, 00:15:08.840 "unmap": true, 00:15:08.840 "flush": true, 00:15:08.840 "reset": true, 00:15:08.840 "nvme_admin": false, 00:15:08.840 "nvme_io": false, 00:15:08.840 "nvme_io_md": false, 00:15:08.840 "write_zeroes": true, 00:15:08.840 "zcopy": true, 00:15:08.840 "get_zone_info": false, 00:15:08.840 "zone_management": false, 00:15:08.840 "zone_append": false, 00:15:08.840 "compare": false, 00:15:08.840 "compare_and_write": false, 00:15:08.840 "abort": true, 00:15:08.840 "seek_hole": false, 00:15:08.840 "seek_data": false, 00:15:08.840 "copy": true, 00:15:08.840 "nvme_iov_md": false 00:15:08.840 }, 00:15:08.840 "memory_domains": [ 00:15:08.840 { 00:15:08.840 "dma_device_id": "system", 00:15:08.840 "dma_device_type": 1 00:15:08.840 }, 00:15:08.840 { 00:15:08.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.840 "dma_device_type": 2 00:15:08.840 } 00:15:08.840 ], 00:15:08.840 "driver_specific": {} 00:15:08.840 } 00:15:08.840 ] 
00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.840 "name": "Existed_Raid", 00:15:08.840 "uuid": "fe508db4-5bc2-4965-ad05-a8da65f673c3", 00:15:08.840 "strip_size_kb": 64, 00:15:08.840 "state": "online", 00:15:08.840 "raid_level": "raid0", 00:15:08.840 "superblock": false, 00:15:08.840 "num_base_bdevs": 3, 00:15:08.840 "num_base_bdevs_discovered": 3, 00:15:08.840 "num_base_bdevs_operational": 3, 00:15:08.840 "base_bdevs_list": [ 00:15:08.840 { 00:15:08.840 "name": "BaseBdev1", 00:15:08.840 "uuid": "973b6412-41f1-423f-92fb-1542942a78c3", 00:15:08.840 "is_configured": true, 00:15:08.840 "data_offset": 0, 00:15:08.840 "data_size": 65536 00:15:08.840 }, 00:15:08.840 { 00:15:08.840 "name": "BaseBdev2", 00:15:08.840 "uuid": "3900364d-b78d-44c3-8dcc-ad98b360bfe9", 00:15:08.840 "is_configured": true, 00:15:08.840 "data_offset": 0, 00:15:08.840 "data_size": 65536 00:15:08.840 }, 00:15:08.840 { 00:15:08.840 "name": "BaseBdev3", 00:15:08.840 "uuid": "c42f0b56-889d-4fec-b7a5-0342df4416da", 00:15:08.840 "is_configured": true, 00:15:08.840 "data_offset": 0, 00:15:08.840 "data_size": 65536 00:15:08.840 } 00:15:08.840 ] 00:15:08.840 }' 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.840 22:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.410 [2024-12-09 22:56:25.081063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.410 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.410 "name": "Existed_Raid", 00:15:09.410 "aliases": [ 00:15:09.410 "fe508db4-5bc2-4965-ad05-a8da65f673c3" 00:15:09.410 ], 00:15:09.410 "product_name": "Raid Volume", 00:15:09.410 "block_size": 512, 00:15:09.410 "num_blocks": 196608, 00:15:09.410 "uuid": "fe508db4-5bc2-4965-ad05-a8da65f673c3", 00:15:09.410 "assigned_rate_limits": { 00:15:09.410 "rw_ios_per_sec": 0, 00:15:09.410 "rw_mbytes_per_sec": 0, 00:15:09.410 "r_mbytes_per_sec": 0, 00:15:09.410 "w_mbytes_per_sec": 0 00:15:09.410 }, 00:15:09.410 "claimed": false, 00:15:09.410 "zoned": false, 00:15:09.410 "supported_io_types": { 00:15:09.410 "read": true, 00:15:09.410 "write": true, 00:15:09.410 "unmap": true, 00:15:09.410 "flush": true, 00:15:09.410 "reset": true, 00:15:09.410 "nvme_admin": false, 00:15:09.410 "nvme_io": false, 00:15:09.410 "nvme_io_md": false, 00:15:09.410 "write_zeroes": true, 00:15:09.410 "zcopy": false, 00:15:09.410 "get_zone_info": false, 00:15:09.410 "zone_management": false, 00:15:09.410 
"zone_append": false, 00:15:09.410 "compare": false, 00:15:09.410 "compare_and_write": false, 00:15:09.410 "abort": false, 00:15:09.410 "seek_hole": false, 00:15:09.410 "seek_data": false, 00:15:09.410 "copy": false, 00:15:09.410 "nvme_iov_md": false 00:15:09.410 }, 00:15:09.410 "memory_domains": [ 00:15:09.410 { 00:15:09.410 "dma_device_id": "system", 00:15:09.410 "dma_device_type": 1 00:15:09.410 }, 00:15:09.410 { 00:15:09.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.410 "dma_device_type": 2 00:15:09.410 }, 00:15:09.410 { 00:15:09.410 "dma_device_id": "system", 00:15:09.410 "dma_device_type": 1 00:15:09.410 }, 00:15:09.410 { 00:15:09.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.410 "dma_device_type": 2 00:15:09.410 }, 00:15:09.410 { 00:15:09.410 "dma_device_id": "system", 00:15:09.410 "dma_device_type": 1 00:15:09.410 }, 00:15:09.410 { 00:15:09.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.410 "dma_device_type": 2 00:15:09.410 } 00:15:09.410 ], 00:15:09.410 "driver_specific": { 00:15:09.410 "raid": { 00:15:09.410 "uuid": "fe508db4-5bc2-4965-ad05-a8da65f673c3", 00:15:09.410 "strip_size_kb": 64, 00:15:09.410 "state": "online", 00:15:09.410 "raid_level": "raid0", 00:15:09.410 "superblock": false, 00:15:09.410 "num_base_bdevs": 3, 00:15:09.410 "num_base_bdevs_discovered": 3, 00:15:09.410 "num_base_bdevs_operational": 3, 00:15:09.410 "base_bdevs_list": [ 00:15:09.410 { 00:15:09.410 "name": "BaseBdev1", 00:15:09.410 "uuid": "973b6412-41f1-423f-92fb-1542942a78c3", 00:15:09.411 "is_configured": true, 00:15:09.411 "data_offset": 0, 00:15:09.411 "data_size": 65536 00:15:09.411 }, 00:15:09.411 { 00:15:09.411 "name": "BaseBdev2", 00:15:09.411 "uuid": "3900364d-b78d-44c3-8dcc-ad98b360bfe9", 00:15:09.411 "is_configured": true, 00:15:09.411 "data_offset": 0, 00:15:09.411 "data_size": 65536 00:15:09.411 }, 00:15:09.411 { 00:15:09.411 "name": "BaseBdev3", 00:15:09.411 "uuid": "c42f0b56-889d-4fec-b7a5-0342df4416da", 00:15:09.411 "is_configured": true, 
00:15:09.411 "data_offset": 0, 00:15:09.411 "data_size": 65536 00:15:09.411 } 00:15:09.411 ] 00:15:09.411 } 00:15:09.411 } 00:15:09.411 }' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:09.411 BaseBdev2 00:15:09.411 BaseBdev3' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.411 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.670 22:56:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.670 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.671 [2024-12-09 22:56:25.356332] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.671 [2024-12-09 22:56:25.356489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.671 [2024-12-09 22:56:25.356580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.671 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.930 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.930 "name": "Existed_Raid", 00:15:09.930 "uuid": "fe508db4-5bc2-4965-ad05-a8da65f673c3", 00:15:09.930 "strip_size_kb": 64, 00:15:09.930 "state": "offline", 00:15:09.930 "raid_level": "raid0", 00:15:09.930 "superblock": false, 00:15:09.930 "num_base_bdevs": 3, 00:15:09.930 "num_base_bdevs_discovered": 2, 00:15:09.930 "num_base_bdevs_operational": 2, 00:15:09.930 "base_bdevs_list": [ 00:15:09.930 { 00:15:09.930 "name": null, 00:15:09.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.930 "is_configured": false, 00:15:09.930 "data_offset": 0, 00:15:09.930 "data_size": 65536 00:15:09.930 }, 00:15:09.930 { 00:15:09.930 "name": "BaseBdev2", 00:15:09.930 "uuid": "3900364d-b78d-44c3-8dcc-ad98b360bfe9", 00:15:09.930 "is_configured": true, 00:15:09.930 "data_offset": 0, 00:15:09.930 "data_size": 65536 00:15:09.930 }, 00:15:09.930 { 00:15:09.930 "name": "BaseBdev3", 00:15:09.930 "uuid": "c42f0b56-889d-4fec-b7a5-0342df4416da", 00:15:09.930 "is_configured": true, 00:15:09.930 "data_offset": 0, 00:15:09.930 "data_size": 65536 00:15:09.930 } 00:15:09.930 ] 00:15:09.930 }' 00:15:09.930 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.930 22:56:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.190 22:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.190 [2024-12-09 22:56:25.986022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.449 22:56:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.449 [2024-12-09 22:56:26.163110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:10.449 [2024-12-09 22:56:26.163278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:10.449 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.450 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.450 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:10.450 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 BaseBdev2 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 [ 00:15:10.710 { 00:15:10.710 "name": "BaseBdev2", 00:15:10.710 "aliases": [ 00:15:10.710 "d5188895-a65d-4f4d-8e50-c078a7ee5a89" 00:15:10.710 ], 00:15:10.710 "product_name": "Malloc disk", 00:15:10.710 "block_size": 512, 00:15:10.710 "num_blocks": 65536, 00:15:10.710 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:10.710 "assigned_rate_limits": { 00:15:10.710 "rw_ios_per_sec": 0, 00:15:10.710 "rw_mbytes_per_sec": 0, 00:15:10.710 "r_mbytes_per_sec": 0, 00:15:10.710 "w_mbytes_per_sec": 0 00:15:10.710 }, 00:15:10.710 "claimed": false, 00:15:10.710 "zoned": false, 00:15:10.710 "supported_io_types": { 00:15:10.710 "read": true, 00:15:10.710 "write": true, 00:15:10.710 "unmap": true, 00:15:10.710 "flush": true, 00:15:10.710 "reset": true, 00:15:10.710 "nvme_admin": false, 00:15:10.710 "nvme_io": false, 00:15:10.710 "nvme_io_md": false, 00:15:10.710 "write_zeroes": true, 00:15:10.710 "zcopy": true, 00:15:10.710 "get_zone_info": false, 00:15:10.710 "zone_management": false, 00:15:10.710 "zone_append": false, 00:15:10.710 "compare": false, 00:15:10.710 "compare_and_write": false, 00:15:10.710 "abort": true, 00:15:10.710 "seek_hole": false, 00:15:10.710 "seek_data": false, 00:15:10.710 "copy": true, 00:15:10.710 "nvme_iov_md": false 00:15:10.710 }, 00:15:10.710 "memory_domains": [ 00:15:10.710 { 00:15:10.710 "dma_device_id": "system", 00:15:10.710 "dma_device_type": 1 00:15:10.710 }, 
00:15:10.710 { 00:15:10.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.710 "dma_device_type": 2 00:15:10.710 } 00:15:10.710 ], 00:15:10.710 "driver_specific": {} 00:15:10.710 } 00:15:10.710 ] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 BaseBdev3 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 [ 00:15:10.710 { 00:15:10.710 "name": "BaseBdev3", 00:15:10.710 "aliases": [ 00:15:10.710 "06db360f-4e4a-41f9-a85f-f897c2d39154" 00:15:10.710 ], 00:15:10.710 "product_name": "Malloc disk", 00:15:10.710 "block_size": 512, 00:15:10.710 "num_blocks": 65536, 00:15:10.710 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:10.710 "assigned_rate_limits": { 00:15:10.710 "rw_ios_per_sec": 0, 00:15:10.710 "rw_mbytes_per_sec": 0, 00:15:10.710 "r_mbytes_per_sec": 0, 00:15:10.710 "w_mbytes_per_sec": 0 00:15:10.710 }, 00:15:10.710 "claimed": false, 00:15:10.710 "zoned": false, 00:15:10.710 "supported_io_types": { 00:15:10.710 "read": true, 00:15:10.710 "write": true, 00:15:10.710 "unmap": true, 00:15:10.710 "flush": true, 00:15:10.710 "reset": true, 00:15:10.710 "nvme_admin": false, 00:15:10.710 "nvme_io": false, 00:15:10.710 "nvme_io_md": false, 00:15:10.710 "write_zeroes": true, 00:15:10.710 "zcopy": true, 00:15:10.710 "get_zone_info": false, 00:15:10.710 "zone_management": false, 00:15:10.710 "zone_append": false, 00:15:10.710 "compare": false, 00:15:10.710 "compare_and_write": false, 00:15:10.710 "abort": true, 00:15:10.710 "seek_hole": false, 00:15:10.710 "seek_data": false, 00:15:10.710 "copy": true, 00:15:10.710 "nvme_iov_md": false 00:15:10.710 }, 00:15:10.710 "memory_domains": [ 00:15:10.710 { 00:15:10.710 "dma_device_id": "system", 00:15:10.710 "dma_device_type": 1 00:15:10.710 }, 00:15:10.710 { 
00:15:10.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.710 "dma_device_type": 2 00:15:10.710 } 00:15:10.710 ], 00:15:10.710 "driver_specific": {} 00:15:10.710 } 00:15:10.710 ] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.710 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.710 [2024-12-09 22:56:26.528002] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.710 [2024-12-09 22:56:26.528192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.710 [2024-12-09 22:56:26.528277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.711 [2024-12-09 22:56:26.530811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.711 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.972 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.972 "name": "Existed_Raid", 00:15:10.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.972 "strip_size_kb": 64, 00:15:10.972 "state": "configuring", 00:15:10.972 "raid_level": "raid0", 00:15:10.972 "superblock": false, 00:15:10.972 "num_base_bdevs": 3, 00:15:10.972 "num_base_bdevs_discovered": 2, 00:15:10.972 "num_base_bdevs_operational": 3, 00:15:10.972 "base_bdevs_list": [ 00:15:10.972 { 00:15:10.972 "name": "BaseBdev1", 00:15:10.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.972 
"is_configured": false, 00:15:10.972 "data_offset": 0, 00:15:10.972 "data_size": 0 00:15:10.972 }, 00:15:10.972 { 00:15:10.972 "name": "BaseBdev2", 00:15:10.972 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:10.972 "is_configured": true, 00:15:10.972 "data_offset": 0, 00:15:10.972 "data_size": 65536 00:15:10.972 }, 00:15:10.972 { 00:15:10.972 "name": "BaseBdev3", 00:15:10.972 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:10.972 "is_configured": true, 00:15:10.972 "data_offset": 0, 00:15:10.972 "data_size": 65536 00:15:10.972 } 00:15:10.972 ] 00:15:10.972 }' 00:15:10.972 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.972 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.231 [2024-12-09 22:56:26.991206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.231 22:56:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.231 22:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.231 "name": "Existed_Raid", 00:15:11.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.231 "strip_size_kb": 64, 00:15:11.231 "state": "configuring", 00:15:11.231 "raid_level": "raid0", 00:15:11.231 "superblock": false, 00:15:11.231 "num_base_bdevs": 3, 00:15:11.231 "num_base_bdevs_discovered": 1, 00:15:11.231 "num_base_bdevs_operational": 3, 00:15:11.231 "base_bdevs_list": [ 00:15:11.231 { 00:15:11.231 "name": "BaseBdev1", 00:15:11.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.231 "is_configured": false, 00:15:11.231 "data_offset": 0, 00:15:11.231 "data_size": 0 00:15:11.231 }, 00:15:11.231 { 00:15:11.231 "name": null, 00:15:11.231 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:11.231 "is_configured": false, 00:15:11.231 "data_offset": 0, 
00:15:11.231 "data_size": 65536 00:15:11.231 }, 00:15:11.231 { 00:15:11.231 "name": "BaseBdev3", 00:15:11.231 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:11.231 "is_configured": true, 00:15:11.231 "data_offset": 0, 00:15:11.231 "data_size": 65536 00:15:11.231 } 00:15:11.231 ] 00:15:11.231 }' 00:15:11.231 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.232 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.800 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:11.800 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.800 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.800 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.801 [2024-12-09 22:56:27.546143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.801 BaseBdev1 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.801 [ 00:15:11.801 { 00:15:11.801 "name": "BaseBdev1", 00:15:11.801 "aliases": [ 00:15:11.801 "2c137b03-136a-4f92-a84a-849cdc8f9be3" 00:15:11.801 ], 00:15:11.801 "product_name": "Malloc disk", 00:15:11.801 "block_size": 512, 00:15:11.801 "num_blocks": 65536, 00:15:11.801 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:11.801 "assigned_rate_limits": { 00:15:11.801 "rw_ios_per_sec": 0, 00:15:11.801 "rw_mbytes_per_sec": 0, 00:15:11.801 "r_mbytes_per_sec": 0, 00:15:11.801 "w_mbytes_per_sec": 0 00:15:11.801 }, 00:15:11.801 "claimed": true, 00:15:11.801 "claim_type": "exclusive_write", 00:15:11.801 "zoned": false, 00:15:11.801 "supported_io_types": { 00:15:11.801 "read": true, 00:15:11.801 "write": true, 00:15:11.801 "unmap": 
true, 00:15:11.801 "flush": true, 00:15:11.801 "reset": true, 00:15:11.801 "nvme_admin": false, 00:15:11.801 "nvme_io": false, 00:15:11.801 "nvme_io_md": false, 00:15:11.801 "write_zeroes": true, 00:15:11.801 "zcopy": true, 00:15:11.801 "get_zone_info": false, 00:15:11.801 "zone_management": false, 00:15:11.801 "zone_append": false, 00:15:11.801 "compare": false, 00:15:11.801 "compare_and_write": false, 00:15:11.801 "abort": true, 00:15:11.801 "seek_hole": false, 00:15:11.801 "seek_data": false, 00:15:11.801 "copy": true, 00:15:11.801 "nvme_iov_md": false 00:15:11.801 }, 00:15:11.801 "memory_domains": [ 00:15:11.801 { 00:15:11.801 "dma_device_id": "system", 00:15:11.801 "dma_device_type": 1 00:15:11.801 }, 00:15:11.801 { 00:15:11.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.801 "dma_device_type": 2 00:15:11.801 } 00:15:11.801 ], 00:15:11.801 "driver_specific": {} 00:15:11.801 } 00:15:11.801 ] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.801 22:56:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.801 "name": "Existed_Raid", 00:15:11.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.801 "strip_size_kb": 64, 00:15:11.801 "state": "configuring", 00:15:11.801 "raid_level": "raid0", 00:15:11.801 "superblock": false, 00:15:11.801 "num_base_bdevs": 3, 00:15:11.801 "num_base_bdevs_discovered": 2, 00:15:11.801 "num_base_bdevs_operational": 3, 00:15:11.801 "base_bdevs_list": [ 00:15:11.801 { 00:15:11.801 "name": "BaseBdev1", 00:15:11.801 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:11.801 "is_configured": true, 00:15:11.801 "data_offset": 0, 00:15:11.801 "data_size": 65536 00:15:11.801 }, 00:15:11.801 { 00:15:11.801 "name": null, 00:15:11.801 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:11.801 "is_configured": false, 00:15:11.801 "data_offset": 0, 00:15:11.801 "data_size": 65536 00:15:11.801 }, 00:15:11.801 { 00:15:11.801 "name": "BaseBdev3", 00:15:11.801 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:11.801 "is_configured": true, 00:15:11.801 "data_offset": 0, 
00:15:11.801 "data_size": 65536 00:15:11.801 } 00:15:11.801 ] 00:15:11.801 }' 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.801 22:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:12.370 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.371 [2024-12-09 22:56:28.093309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.371 "name": "Existed_Raid", 00:15:12.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.371 "strip_size_kb": 64, 00:15:12.371 "state": "configuring", 00:15:12.371 "raid_level": "raid0", 00:15:12.371 "superblock": false, 00:15:12.371 "num_base_bdevs": 3, 00:15:12.371 "num_base_bdevs_discovered": 1, 00:15:12.371 "num_base_bdevs_operational": 3, 00:15:12.371 "base_bdevs_list": [ 00:15:12.371 { 00:15:12.371 "name": "BaseBdev1", 00:15:12.371 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:12.371 "is_configured": true, 00:15:12.371 "data_offset": 0, 00:15:12.371 "data_size": 65536 00:15:12.371 }, 00:15:12.371 { 
00:15:12.371 "name": null, 00:15:12.371 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:12.371 "is_configured": false, 00:15:12.371 "data_offset": 0, 00:15:12.371 "data_size": 65536 00:15:12.371 }, 00:15:12.371 { 00:15:12.371 "name": null, 00:15:12.371 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:12.371 "is_configured": false, 00:15:12.371 "data_offset": 0, 00:15:12.371 "data_size": 65536 00:15:12.371 } 00:15:12.371 ] 00:15:12.371 }' 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.371 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.940 [2024-12-09 22:56:28.652525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.940 "name": "Existed_Raid", 00:15:12.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.940 "strip_size_kb": 64, 00:15:12.940 "state": "configuring", 00:15:12.940 "raid_level": "raid0", 00:15:12.940 
"superblock": false, 00:15:12.940 "num_base_bdevs": 3, 00:15:12.940 "num_base_bdevs_discovered": 2, 00:15:12.940 "num_base_bdevs_operational": 3, 00:15:12.940 "base_bdevs_list": [ 00:15:12.940 { 00:15:12.940 "name": "BaseBdev1", 00:15:12.940 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:12.940 "is_configured": true, 00:15:12.940 "data_offset": 0, 00:15:12.940 "data_size": 65536 00:15:12.940 }, 00:15:12.940 { 00:15:12.940 "name": null, 00:15:12.940 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:12.940 "is_configured": false, 00:15:12.940 "data_offset": 0, 00:15:12.940 "data_size": 65536 00:15:12.940 }, 00:15:12.940 { 00:15:12.940 "name": "BaseBdev3", 00:15:12.940 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:12.940 "is_configured": true, 00:15:12.940 "data_offset": 0, 00:15:12.940 "data_size": 65536 00:15:12.940 } 00:15:12.940 ] 00:15:12.940 }' 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.940 22:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.509 [2024-12-09 22:56:29.171715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.509 "name": "Existed_Raid", 00:15:13.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.509 "strip_size_kb": 64, 00:15:13.509 "state": "configuring", 00:15:13.509 "raid_level": "raid0", 00:15:13.509 "superblock": false, 00:15:13.509 "num_base_bdevs": 3, 00:15:13.509 "num_base_bdevs_discovered": 1, 00:15:13.509 "num_base_bdevs_operational": 3, 00:15:13.509 "base_bdevs_list": [ 00:15:13.509 { 00:15:13.509 "name": null, 00:15:13.509 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:13.509 "is_configured": false, 00:15:13.509 "data_offset": 0, 00:15:13.509 "data_size": 65536 00:15:13.509 }, 00:15:13.509 { 00:15:13.509 "name": null, 00:15:13.509 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:13.509 "is_configured": false, 00:15:13.509 "data_offset": 0, 00:15:13.509 "data_size": 65536 00:15:13.509 }, 00:15:13.509 { 00:15:13.509 "name": "BaseBdev3", 00:15:13.509 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:13.509 "is_configured": true, 00:15:13.509 "data_offset": 0, 00:15:13.509 "data_size": 65536 00:15:13.509 } 00:15:13.509 ] 00:15:13.509 }' 00:15:13.509 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.510 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.079 [2024-12-09 22:56:29.819537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.079 "name": "Existed_Raid", 00:15:14.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.079 "strip_size_kb": 64, 00:15:14.079 "state": "configuring", 00:15:14.079 "raid_level": "raid0", 00:15:14.079 "superblock": false, 00:15:14.079 "num_base_bdevs": 3, 00:15:14.079 "num_base_bdevs_discovered": 2, 00:15:14.079 "num_base_bdevs_operational": 3, 00:15:14.079 "base_bdevs_list": [ 00:15:14.079 { 00:15:14.079 "name": null, 00:15:14.079 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:14.079 "is_configured": false, 00:15:14.079 "data_offset": 0, 00:15:14.079 "data_size": 65536 00:15:14.079 }, 00:15:14.079 { 00:15:14.079 "name": "BaseBdev2", 00:15:14.079 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:14.079 "is_configured": true, 00:15:14.079 "data_offset": 0, 00:15:14.079 "data_size": 65536 00:15:14.079 }, 00:15:14.079 { 00:15:14.079 "name": "BaseBdev3", 00:15:14.079 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:14.079 "is_configured": true, 00:15:14.079 "data_offset": 0, 00:15:14.079 "data_size": 65536 00:15:14.079 } 00:15:14.079 ] 00:15:14.079 }' 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.079 22:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.648 22:56:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2c137b03-136a-4f92-a84a-849cdc8f9be3 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.648 [2024-12-09 22:56:30.437023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:14.648 [2024-12-09 22:56:30.437179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:14.648 [2024-12-09 22:56:30.437227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:14.648 [2024-12-09 22:56:30.437592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:15:14.648 [2024-12-09 22:56:30.437846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:14.648 [2024-12-09 22:56:30.437887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:14.648 [2024-12-09 22:56:30.438230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.648 NewBaseBdev 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:14.648 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:14.649 [ 00:15:14.649 { 00:15:14.649 "name": "NewBaseBdev", 00:15:14.649 "aliases": [ 00:15:14.649 "2c137b03-136a-4f92-a84a-849cdc8f9be3" 00:15:14.649 ], 00:15:14.649 "product_name": "Malloc disk", 00:15:14.649 "block_size": 512, 00:15:14.649 "num_blocks": 65536, 00:15:14.649 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:14.649 "assigned_rate_limits": { 00:15:14.649 "rw_ios_per_sec": 0, 00:15:14.649 "rw_mbytes_per_sec": 0, 00:15:14.649 "r_mbytes_per_sec": 0, 00:15:14.649 "w_mbytes_per_sec": 0 00:15:14.649 }, 00:15:14.649 "claimed": true, 00:15:14.649 "claim_type": "exclusive_write", 00:15:14.649 "zoned": false, 00:15:14.649 "supported_io_types": { 00:15:14.649 "read": true, 00:15:14.649 "write": true, 00:15:14.649 "unmap": true, 00:15:14.649 "flush": true, 00:15:14.649 "reset": true, 00:15:14.649 "nvme_admin": false, 00:15:14.649 "nvme_io": false, 00:15:14.649 "nvme_io_md": false, 00:15:14.649 "write_zeroes": true, 00:15:14.649 "zcopy": true, 00:15:14.649 "get_zone_info": false, 00:15:14.649 "zone_management": false, 00:15:14.649 "zone_append": false, 00:15:14.649 "compare": false, 00:15:14.649 "compare_and_write": false, 00:15:14.649 "abort": true, 00:15:14.649 "seek_hole": false, 00:15:14.649 "seek_data": false, 00:15:14.649 "copy": true, 00:15:14.649 "nvme_iov_md": false 00:15:14.649 }, 00:15:14.649 "memory_domains": [ 00:15:14.649 { 00:15:14.649 "dma_device_id": "system", 00:15:14.649 "dma_device_type": 1 00:15:14.649 }, 00:15:14.649 { 00:15:14.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.649 "dma_device_type": 2 00:15:14.649 } 00:15:14.649 ], 00:15:14.649 "driver_specific": {} 00:15:14.649 } 00:15:14.649 ] 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.649 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.908 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.908 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.908 "name": "Existed_Raid", 00:15:14.908 "uuid": "bf27db4f-c50e-4c86-b536-810a0902c17c", 00:15:14.908 "strip_size_kb": 64, 00:15:14.908 "state": "online", 00:15:14.908 "raid_level": "raid0", 00:15:14.908 "superblock": false, 00:15:14.908 "num_base_bdevs": 3, 00:15:14.908 
"num_base_bdevs_discovered": 3, 00:15:14.908 "num_base_bdevs_operational": 3, 00:15:14.908 "base_bdevs_list": [ 00:15:14.908 { 00:15:14.908 "name": "NewBaseBdev", 00:15:14.908 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:14.908 "is_configured": true, 00:15:14.908 "data_offset": 0, 00:15:14.908 "data_size": 65536 00:15:14.908 }, 00:15:14.908 { 00:15:14.908 "name": "BaseBdev2", 00:15:14.908 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:14.908 "is_configured": true, 00:15:14.908 "data_offset": 0, 00:15:14.908 "data_size": 65536 00:15:14.908 }, 00:15:14.908 { 00:15:14.908 "name": "BaseBdev3", 00:15:14.908 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:14.908 "is_configured": true, 00:15:14.908 "data_offset": 0, 00:15:14.908 "data_size": 65536 00:15:14.908 } 00:15:14.908 ] 00:15:14.908 }' 00:15:14.908 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.908 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.167 [2024-12-09 22:56:30.968688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.167 22:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.167 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:15.168 "name": "Existed_Raid", 00:15:15.168 "aliases": [ 00:15:15.168 "bf27db4f-c50e-4c86-b536-810a0902c17c" 00:15:15.168 ], 00:15:15.168 "product_name": "Raid Volume", 00:15:15.168 "block_size": 512, 00:15:15.168 "num_blocks": 196608, 00:15:15.168 "uuid": "bf27db4f-c50e-4c86-b536-810a0902c17c", 00:15:15.168 "assigned_rate_limits": { 00:15:15.168 "rw_ios_per_sec": 0, 00:15:15.168 "rw_mbytes_per_sec": 0, 00:15:15.168 "r_mbytes_per_sec": 0, 00:15:15.168 "w_mbytes_per_sec": 0 00:15:15.168 }, 00:15:15.168 "claimed": false, 00:15:15.168 "zoned": false, 00:15:15.168 "supported_io_types": { 00:15:15.168 "read": true, 00:15:15.168 "write": true, 00:15:15.168 "unmap": true, 00:15:15.168 "flush": true, 00:15:15.168 "reset": true, 00:15:15.168 "nvme_admin": false, 00:15:15.168 "nvme_io": false, 00:15:15.168 "nvme_io_md": false, 00:15:15.168 "write_zeroes": true, 00:15:15.168 "zcopy": false, 00:15:15.168 "get_zone_info": false, 00:15:15.168 "zone_management": false, 00:15:15.168 "zone_append": false, 00:15:15.168 "compare": false, 00:15:15.168 "compare_and_write": false, 00:15:15.168 "abort": false, 00:15:15.168 "seek_hole": false, 00:15:15.168 "seek_data": false, 00:15:15.168 "copy": false, 00:15:15.168 "nvme_iov_md": false 00:15:15.168 }, 00:15:15.168 "memory_domains": [ 00:15:15.168 { 00:15:15.168 "dma_device_id": "system", 00:15:15.168 "dma_device_type": 1 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.168 "dma_device_type": 2 00:15:15.168 }, 00:15:15.168 
{ 00:15:15.168 "dma_device_id": "system", 00:15:15.168 "dma_device_type": 1 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.168 "dma_device_type": 2 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "dma_device_id": "system", 00:15:15.168 "dma_device_type": 1 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.168 "dma_device_type": 2 00:15:15.168 } 00:15:15.168 ], 00:15:15.168 "driver_specific": { 00:15:15.168 "raid": { 00:15:15.168 "uuid": "bf27db4f-c50e-4c86-b536-810a0902c17c", 00:15:15.168 "strip_size_kb": 64, 00:15:15.168 "state": "online", 00:15:15.168 "raid_level": "raid0", 00:15:15.168 "superblock": false, 00:15:15.168 "num_base_bdevs": 3, 00:15:15.168 "num_base_bdevs_discovered": 3, 00:15:15.168 "num_base_bdevs_operational": 3, 00:15:15.168 "base_bdevs_list": [ 00:15:15.168 { 00:15:15.168 "name": "NewBaseBdev", 00:15:15.168 "uuid": "2c137b03-136a-4f92-a84a-849cdc8f9be3", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 0, 00:15:15.168 "data_size": 65536 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "name": "BaseBdev2", 00:15:15.168 "uuid": "d5188895-a65d-4f4d-8e50-c078a7ee5a89", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 0, 00:15:15.168 "data_size": 65536 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "name": "BaseBdev3", 00:15:15.168 "uuid": "06db360f-4e4a-41f9-a85f-f897c2d39154", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 0, 00:15:15.168 "data_size": 65536 00:15:15.168 } 00:15:15.168 ] 00:15:15.168 } 00:15:15.168 } 00:15:15.168 }' 00:15:15.168 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:15.427 BaseBdev2 00:15:15.427 BaseBdev3' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.427 
22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.427 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.428 [2024-12-09 22:56:31.211831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.428 [2024-12-09 22:56:31.211864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.428 [2024-12-09 22:56:31.211967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.428 [2024-12-09 22:56:31.212035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.428 [2024-12-09 22:56:31.212050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64283 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64283 ']' 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64283 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64283 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64283' 00:15:15.428 killing process with pid 64283 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64283 00:15:15.428 [2024-12-09 22:56:31.257738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.428 22:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64283 00:15:15.995 [2024-12-09 22:56:31.602220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.374 22:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:17.374 00:15:17.374 real 0m11.648s 00:15:17.374 user 0m18.151s 00:15:17.374 sys 0m2.241s 00:15:17.374 22:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.374 
22:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.374 ************************************ 00:15:17.374 END TEST raid_state_function_test 00:15:17.374 ************************************ 00:15:17.374 22:56:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:17.374 22:56:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:17.374 22:56:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.374 22:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.374 ************************************ 00:15:17.374 START TEST raid_state_function_test_sb 00:15:17.374 ************************************ 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64916 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64916' 00:15:17.374 Process raid pid: 64916 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64916 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64916 ']' 00:15:17.374 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.375 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.375 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.375 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.375 22:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.375 [2024-12-09 22:56:33.126911] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:15:17.375 [2024-12-09 22:56:33.127114] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.634 [2024-12-09 22:56:33.290556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.634 [2024-12-09 22:56:33.432830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.892 [2024-12-09 22:56:33.694165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.892 [2024-12-09 22:56:33.694361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.151 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.151 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:18.151 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:18.151 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.151 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.410 [2024-12-09 22:56:34.008702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.410 [2024-12-09 22:56:34.008822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.410 [2024-12-09 22:56:34.008877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.410 [2024-12-09 22:56:34.008940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.410 [2024-12-09 22:56:34.008979] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:15:18.410 [2024-12-09 22:56:34.009007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.410 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.410 "name": "Existed_Raid", 00:15:18.410 "uuid": "f45a9c37-bd9f-4ddf-9d39-f6a1ded11713", 00:15:18.410 "strip_size_kb": 64, 00:15:18.410 "state": "configuring", 00:15:18.410 "raid_level": "raid0", 00:15:18.410 "superblock": true, 00:15:18.410 "num_base_bdevs": 3, 00:15:18.410 "num_base_bdevs_discovered": 0, 00:15:18.411 "num_base_bdevs_operational": 3, 00:15:18.411 "base_bdevs_list": [ 00:15:18.411 { 00:15:18.411 "name": "BaseBdev1", 00:15:18.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.411 "is_configured": false, 00:15:18.411 "data_offset": 0, 00:15:18.411 "data_size": 0 00:15:18.411 }, 00:15:18.411 { 00:15:18.411 "name": "BaseBdev2", 00:15:18.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.411 "is_configured": false, 00:15:18.411 "data_offset": 0, 00:15:18.411 "data_size": 0 00:15:18.411 }, 00:15:18.411 { 00:15:18.411 "name": "BaseBdev3", 00:15:18.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.411 "is_configured": false, 00:15:18.411 "data_offset": 0, 00:15:18.411 "data_size": 0 00:15:18.411 } 00:15:18.411 ] 00:15:18.411 }' 00:15:18.411 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.411 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 [2024-12-09 22:56:34.440053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.670 [2024-12-09 22:56:34.440169] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 [2024-12-09 22:56:34.448020] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.670 [2024-12-09 22:56:34.448079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.670 [2024-12-09 22:56:34.448091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.670 [2024-12-09 22:56:34.448102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.670 [2024-12-09 22:56:34.448110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:18.670 [2024-12-09 22:56:34.448121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 [2024-12-09 22:56:34.500312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.670 BaseBdev1 
00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 [ 00:15:18.670 { 00:15:18.930 "name": "BaseBdev1", 00:15:18.930 "aliases": [ 00:15:18.930 "59a0dc65-33fa-4860-bc49-7674311313fe" 00:15:18.930 ], 00:15:18.930 "product_name": "Malloc disk", 00:15:18.930 "block_size": 512, 00:15:18.930 "num_blocks": 65536, 00:15:18.930 "uuid": "59a0dc65-33fa-4860-bc49-7674311313fe", 00:15:18.930 "assigned_rate_limits": { 00:15:18.930 
"rw_ios_per_sec": 0, 00:15:18.930 "rw_mbytes_per_sec": 0, 00:15:18.930 "r_mbytes_per_sec": 0, 00:15:18.930 "w_mbytes_per_sec": 0 00:15:18.930 }, 00:15:18.930 "claimed": true, 00:15:18.930 "claim_type": "exclusive_write", 00:15:18.930 "zoned": false, 00:15:18.930 "supported_io_types": { 00:15:18.930 "read": true, 00:15:18.930 "write": true, 00:15:18.930 "unmap": true, 00:15:18.930 "flush": true, 00:15:18.930 "reset": true, 00:15:18.930 "nvme_admin": false, 00:15:18.930 "nvme_io": false, 00:15:18.930 "nvme_io_md": false, 00:15:18.930 "write_zeroes": true, 00:15:18.930 "zcopy": true, 00:15:18.930 "get_zone_info": false, 00:15:18.930 "zone_management": false, 00:15:18.930 "zone_append": false, 00:15:18.930 "compare": false, 00:15:18.930 "compare_and_write": false, 00:15:18.930 "abort": true, 00:15:18.930 "seek_hole": false, 00:15:18.930 "seek_data": false, 00:15:18.930 "copy": true, 00:15:18.930 "nvme_iov_md": false 00:15:18.930 }, 00:15:18.930 "memory_domains": [ 00:15:18.930 { 00:15:18.930 "dma_device_id": "system", 00:15:18.930 "dma_device_type": 1 00:15:18.930 }, 00:15:18.930 { 00:15:18.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.930 "dma_device_type": 2 00:15:18.930 } 00:15:18.930 ], 00:15:18.930 "driver_specific": {} 00:15:18.930 } 00:15:18.930 ] 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.930 "name": "Existed_Raid", 00:15:18.930 "uuid": "97e0e199-b37e-4cda-923f-a3468200de2c", 00:15:18.930 "strip_size_kb": 64, 00:15:18.930 "state": "configuring", 00:15:18.930 "raid_level": "raid0", 00:15:18.930 "superblock": true, 00:15:18.930 "num_base_bdevs": 3, 00:15:18.930 "num_base_bdevs_discovered": 1, 00:15:18.930 "num_base_bdevs_operational": 3, 00:15:18.930 "base_bdevs_list": [ 00:15:18.930 { 00:15:18.930 "name": "BaseBdev1", 00:15:18.930 "uuid": "59a0dc65-33fa-4860-bc49-7674311313fe", 00:15:18.930 "is_configured": true, 00:15:18.930 "data_offset": 2048, 00:15:18.930 "data_size": 63488 
00:15:18.930 }, 00:15:18.930 { 00:15:18.930 "name": "BaseBdev2", 00:15:18.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.930 "is_configured": false, 00:15:18.930 "data_offset": 0, 00:15:18.930 "data_size": 0 00:15:18.930 }, 00:15:18.930 { 00:15:18.930 "name": "BaseBdev3", 00:15:18.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.930 "is_configured": false, 00:15:18.930 "data_offset": 0, 00:15:18.930 "data_size": 0 00:15:18.930 } 00:15:18.930 ] 00:15:18.930 }' 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.930 22:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.190 [2024-12-09 22:56:35.019539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.190 [2024-12-09 22:56:35.019662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.190 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.191 [2024-12-09 22:56:35.031574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.191 [2024-12-09 
22:56:35.033999] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.191 [2024-12-09 22:56:35.034080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.191 [2024-12-09 22:56:35.034112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.191 [2024-12-09 22:56:35.034136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.191 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.449 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.449 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.449 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.449 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.449 "name": "Existed_Raid", 00:15:19.449 "uuid": "806d3404-91e6-41f8-9476-671f695ad9d6", 00:15:19.449 "strip_size_kb": 64, 00:15:19.449 "state": "configuring", 00:15:19.449 "raid_level": "raid0", 00:15:19.449 "superblock": true, 00:15:19.449 "num_base_bdevs": 3, 00:15:19.449 "num_base_bdevs_discovered": 1, 00:15:19.449 "num_base_bdevs_operational": 3, 00:15:19.449 "base_bdevs_list": [ 00:15:19.449 { 00:15:19.449 "name": "BaseBdev1", 00:15:19.449 "uuid": "59a0dc65-33fa-4860-bc49-7674311313fe", 00:15:19.449 "is_configured": true, 00:15:19.449 "data_offset": 2048, 00:15:19.449 "data_size": 63488 00:15:19.449 }, 00:15:19.449 { 00:15:19.449 "name": "BaseBdev2", 00:15:19.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.449 "is_configured": false, 00:15:19.449 "data_offset": 0, 00:15:19.449 "data_size": 0 00:15:19.449 }, 00:15:19.449 { 00:15:19.449 "name": "BaseBdev3", 00:15:19.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.449 "is_configured": false, 00:15:19.449 "data_offset": 0, 00:15:19.449 "data_size": 0 00:15:19.449 } 00:15:19.449 ] 00:15:19.449 }' 00:15:19.449 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.449 22:56:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.707 [2024-12-09 22:56:35.555560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.707 BaseBdev2 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.707 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 [ 00:15:19.967 { 00:15:19.967 "name": "BaseBdev2", 00:15:19.967 "aliases": [ 00:15:19.967 "c69095d2-7866-44b3-aa52-63690a46c9da" 00:15:19.967 ], 00:15:19.967 "product_name": "Malloc disk", 00:15:19.967 "block_size": 512, 00:15:19.967 "num_blocks": 65536, 00:15:19.967 "uuid": "c69095d2-7866-44b3-aa52-63690a46c9da", 00:15:19.967 "assigned_rate_limits": { 00:15:19.967 "rw_ios_per_sec": 0, 00:15:19.967 "rw_mbytes_per_sec": 0, 00:15:19.967 "r_mbytes_per_sec": 0, 00:15:19.967 "w_mbytes_per_sec": 0 00:15:19.967 }, 00:15:19.967 "claimed": true, 00:15:19.967 "claim_type": "exclusive_write", 00:15:19.967 "zoned": false, 00:15:19.967 "supported_io_types": { 00:15:19.967 "read": true, 00:15:19.967 "write": true, 00:15:19.967 "unmap": true, 00:15:19.967 "flush": true, 00:15:19.967 "reset": true, 00:15:19.967 "nvme_admin": false, 00:15:19.967 "nvme_io": false, 00:15:19.967 "nvme_io_md": false, 00:15:19.967 "write_zeroes": true, 00:15:19.967 "zcopy": true, 00:15:19.967 "get_zone_info": false, 00:15:19.967 "zone_management": false, 00:15:19.967 "zone_append": false, 00:15:19.967 "compare": false, 00:15:19.967 "compare_and_write": false, 00:15:19.967 "abort": true, 00:15:19.967 "seek_hole": false, 00:15:19.967 "seek_data": false, 00:15:19.967 "copy": true, 00:15:19.967 "nvme_iov_md": false 00:15:19.967 }, 00:15:19.967 "memory_domains": [ 00:15:19.967 { 00:15:19.967 "dma_device_id": "system", 00:15:19.967 "dma_device_type": 1 00:15:19.967 }, 00:15:19.967 { 00:15:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.967 "dma_device_type": 2 00:15:19.967 } 00:15:19.967 ], 00:15:19.967 "driver_specific": {} 00:15:19.967 } 00:15:19.967 ] 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.967 "name": "Existed_Raid", 00:15:19.967 "uuid": "806d3404-91e6-41f8-9476-671f695ad9d6", 00:15:19.967 "strip_size_kb": 64, 00:15:19.967 "state": "configuring", 00:15:19.967 "raid_level": "raid0", 00:15:19.967 "superblock": true, 00:15:19.967 "num_base_bdevs": 3, 00:15:19.967 "num_base_bdevs_discovered": 2, 00:15:19.967 "num_base_bdevs_operational": 3, 00:15:19.967 "base_bdevs_list": [ 00:15:19.967 { 00:15:19.967 "name": "BaseBdev1", 00:15:19.967 "uuid": "59a0dc65-33fa-4860-bc49-7674311313fe", 00:15:19.967 "is_configured": true, 00:15:19.967 "data_offset": 2048, 00:15:19.967 "data_size": 63488 00:15:19.967 }, 00:15:19.967 { 00:15:19.967 "name": "BaseBdev2", 00:15:19.967 "uuid": "c69095d2-7866-44b3-aa52-63690a46c9da", 00:15:19.967 "is_configured": true, 00:15:19.967 "data_offset": 2048, 00:15:19.967 "data_size": 63488 00:15:19.967 }, 00:15:19.967 { 00:15:19.967 "name": "BaseBdev3", 00:15:19.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.967 "is_configured": false, 00:15:19.967 "data_offset": 0, 00:15:19.967 "data_size": 0 00:15:19.967 } 00:15:19.967 ] 00:15:19.967 }' 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.967 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.227 22:56:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:20.227 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.227 22:56:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.227 [2024-12-09 22:56:36.052664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.227 [2024-12-09 22:56:36.053150] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:20.227 [2024-12-09 22:56:36.053223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:20.227 [2024-12-09 22:56:36.053610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:20.227 BaseBdev3 00:15:20.227 [2024-12-09 22:56:36.053851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:20.227 [2024-12-09 22:56:36.053913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:20.227 [2024-12-09 22:56:36.054116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.227 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.227 [ 00:15:20.227 { 00:15:20.227 "name": "BaseBdev3", 00:15:20.227 "aliases": [ 00:15:20.227 "0c97d694-2b5f-4584-af95-aef0e076199e" 00:15:20.227 ], 00:15:20.227 "product_name": "Malloc disk", 00:15:20.227 "block_size": 512, 00:15:20.227 "num_blocks": 65536, 00:15:20.228 "uuid": "0c97d694-2b5f-4584-af95-aef0e076199e", 00:15:20.228 "assigned_rate_limits": { 00:15:20.228 "rw_ios_per_sec": 0, 00:15:20.228 "rw_mbytes_per_sec": 0, 00:15:20.228 "r_mbytes_per_sec": 0, 00:15:20.228 "w_mbytes_per_sec": 0 00:15:20.228 }, 00:15:20.228 "claimed": true, 00:15:20.228 "claim_type": "exclusive_write", 00:15:20.228 "zoned": false, 00:15:20.487 "supported_io_types": { 00:15:20.487 "read": true, 00:15:20.487 "write": true, 00:15:20.487 "unmap": true, 00:15:20.487 "flush": true, 00:15:20.487 "reset": true, 00:15:20.487 "nvme_admin": false, 00:15:20.487 "nvme_io": false, 00:15:20.487 "nvme_io_md": false, 00:15:20.487 "write_zeroes": true, 00:15:20.487 "zcopy": true, 00:15:20.487 "get_zone_info": false, 00:15:20.487 "zone_management": false, 00:15:20.487 "zone_append": false, 00:15:20.487 "compare": false, 00:15:20.487 "compare_and_write": false, 00:15:20.487 "abort": true, 00:15:20.487 "seek_hole": false, 00:15:20.487 "seek_data": false, 00:15:20.487 "copy": true, 00:15:20.487 "nvme_iov_md": false 00:15:20.487 }, 00:15:20.487 "memory_domains": [ 00:15:20.487 { 00:15:20.487 "dma_device_id": "system", 00:15:20.487 "dma_device_type": 1 00:15:20.487 }, 00:15:20.487 { 00:15:20.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.487 "dma_device_type": 2 00:15:20.487 } 00:15:20.487 ], 00:15:20.487 "driver_specific": 
{} 00:15:20.487 } 00:15:20.487 ] 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.487 "name": "Existed_Raid", 00:15:20.487 "uuid": "806d3404-91e6-41f8-9476-671f695ad9d6", 00:15:20.487 "strip_size_kb": 64, 00:15:20.487 "state": "online", 00:15:20.487 "raid_level": "raid0", 00:15:20.487 "superblock": true, 00:15:20.487 "num_base_bdevs": 3, 00:15:20.487 "num_base_bdevs_discovered": 3, 00:15:20.487 "num_base_bdevs_operational": 3, 00:15:20.487 "base_bdevs_list": [ 00:15:20.487 { 00:15:20.487 "name": "BaseBdev1", 00:15:20.487 "uuid": "59a0dc65-33fa-4860-bc49-7674311313fe", 00:15:20.487 "is_configured": true, 00:15:20.487 "data_offset": 2048, 00:15:20.487 "data_size": 63488 00:15:20.487 }, 00:15:20.487 { 00:15:20.487 "name": "BaseBdev2", 00:15:20.487 "uuid": "c69095d2-7866-44b3-aa52-63690a46c9da", 00:15:20.487 "is_configured": true, 00:15:20.487 "data_offset": 2048, 00:15:20.487 "data_size": 63488 00:15:20.487 }, 00:15:20.487 { 00:15:20.487 "name": "BaseBdev3", 00:15:20.487 "uuid": "0c97d694-2b5f-4584-af95-aef0e076199e", 00:15:20.487 "is_configured": true, 00:15:20.487 "data_offset": 2048, 00:15:20.487 "data_size": 63488 00:15:20.487 } 00:15:20.487 ] 00:15:20.487 }' 00:15:20.487 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.488 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:20.745 [2024-12-09 22:56:36.560308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.745 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.004 "name": "Existed_Raid", 00:15:21.004 "aliases": [ 00:15:21.004 "806d3404-91e6-41f8-9476-671f695ad9d6" 00:15:21.004 ], 00:15:21.004 "product_name": "Raid Volume", 00:15:21.004 "block_size": 512, 00:15:21.004 "num_blocks": 190464, 00:15:21.004 "uuid": "806d3404-91e6-41f8-9476-671f695ad9d6", 00:15:21.004 "assigned_rate_limits": { 00:15:21.004 "rw_ios_per_sec": 0, 00:15:21.004 "rw_mbytes_per_sec": 0, 00:15:21.004 "r_mbytes_per_sec": 0, 00:15:21.004 "w_mbytes_per_sec": 0 00:15:21.004 }, 00:15:21.004 "claimed": false, 00:15:21.004 "zoned": false, 00:15:21.004 "supported_io_types": { 00:15:21.004 "read": true, 00:15:21.004 "write": true, 00:15:21.004 "unmap": true, 00:15:21.004 "flush": true, 00:15:21.004 "reset": true, 00:15:21.004 "nvme_admin": false, 00:15:21.004 "nvme_io": false, 00:15:21.004 "nvme_io_md": false, 00:15:21.004 
"write_zeroes": true, 00:15:21.004 "zcopy": false, 00:15:21.004 "get_zone_info": false, 00:15:21.004 "zone_management": false, 00:15:21.004 "zone_append": false, 00:15:21.004 "compare": false, 00:15:21.004 "compare_and_write": false, 00:15:21.004 "abort": false, 00:15:21.004 "seek_hole": false, 00:15:21.004 "seek_data": false, 00:15:21.004 "copy": false, 00:15:21.004 "nvme_iov_md": false 00:15:21.004 }, 00:15:21.004 "memory_domains": [ 00:15:21.004 { 00:15:21.004 "dma_device_id": "system", 00:15:21.004 "dma_device_type": 1 00:15:21.004 }, 00:15:21.004 { 00:15:21.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.004 "dma_device_type": 2 00:15:21.004 }, 00:15:21.004 { 00:15:21.004 "dma_device_id": "system", 00:15:21.004 "dma_device_type": 1 00:15:21.004 }, 00:15:21.004 { 00:15:21.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.004 "dma_device_type": 2 00:15:21.004 }, 00:15:21.004 { 00:15:21.004 "dma_device_id": "system", 00:15:21.004 "dma_device_type": 1 00:15:21.004 }, 00:15:21.004 { 00:15:21.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.004 "dma_device_type": 2 00:15:21.004 } 00:15:21.004 ], 00:15:21.004 "driver_specific": { 00:15:21.004 "raid": { 00:15:21.004 "uuid": "806d3404-91e6-41f8-9476-671f695ad9d6", 00:15:21.004 "strip_size_kb": 64, 00:15:21.004 "state": "online", 00:15:21.004 "raid_level": "raid0", 00:15:21.004 "superblock": true, 00:15:21.004 "num_base_bdevs": 3, 00:15:21.004 "num_base_bdevs_discovered": 3, 00:15:21.004 "num_base_bdevs_operational": 3, 00:15:21.004 "base_bdevs_list": [ 00:15:21.004 { 00:15:21.004 "name": "BaseBdev1", 00:15:21.004 "uuid": "59a0dc65-33fa-4860-bc49-7674311313fe", 00:15:21.004 "is_configured": true, 00:15:21.004 "data_offset": 2048, 00:15:21.004 "data_size": 63488 00:15:21.004 }, 00:15:21.004 { 00:15:21.004 "name": "BaseBdev2", 00:15:21.004 "uuid": "c69095d2-7866-44b3-aa52-63690a46c9da", 00:15:21.004 "is_configured": true, 00:15:21.004 "data_offset": 2048, 00:15:21.004 "data_size": 63488 00:15:21.004 }, 
00:15:21.004 { 00:15:21.004 "name": "BaseBdev3", 00:15:21.004 "uuid": "0c97d694-2b5f-4584-af95-aef0e076199e", 00:15:21.004 "is_configured": true, 00:15:21.004 "data_offset": 2048, 00:15:21.004 "data_size": 63488 00:15:21.004 } 00:15:21.004 ] 00:15:21.004 } 00:15:21.004 } 00:15:21.004 }' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:21.004 BaseBdev2 00:15:21.004 BaseBdev3' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.004 
22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.004 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.004 [2024-12-09 22:56:36.803607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.005 [2024-12-09 22:56:36.803651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.005 [2024-12-09 22:56:36.803719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.309 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.309 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.310 "name": "Existed_Raid", 00:15:21.310 "uuid": "806d3404-91e6-41f8-9476-671f695ad9d6", 00:15:21.310 "strip_size_kb": 64, 00:15:21.310 "state": "offline", 00:15:21.310 "raid_level": "raid0", 00:15:21.310 "superblock": true, 00:15:21.310 "num_base_bdevs": 3, 00:15:21.310 "num_base_bdevs_discovered": 2, 00:15:21.310 "num_base_bdevs_operational": 2, 00:15:21.310 "base_bdevs_list": [ 00:15:21.310 { 00:15:21.310 "name": null, 00:15:21.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.310 "is_configured": false, 00:15:21.310 "data_offset": 0, 00:15:21.310 "data_size": 63488 00:15:21.310 }, 00:15:21.310 { 00:15:21.310 "name": "BaseBdev2", 00:15:21.310 "uuid": "c69095d2-7866-44b3-aa52-63690a46c9da", 00:15:21.310 "is_configured": true, 00:15:21.310 "data_offset": 2048, 00:15:21.310 "data_size": 63488 00:15:21.310 }, 00:15:21.310 { 00:15:21.310 "name": "BaseBdev3", 00:15:21.310 "uuid": "0c97d694-2b5f-4584-af95-aef0e076199e", 
00:15:21.310 "is_configured": true, 00:15:21.310 "data_offset": 2048, 00:15:21.310 "data_size": 63488 00:15:21.310 } 00:15:21.310 ] 00:15:21.310 }' 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.310 22:56:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.569 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:21.569 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:21.569 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.569 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:21.569 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.569 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.829 [2024-12-09 22:56:37.469828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.829 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.829 [2024-12-09 22:56:37.640802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:21.829 [2024-12-09 22:56:37.640917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 BaseBdev2 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.100 22:56:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.100 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.100 [ 00:15:22.100 { 00:15:22.100 "name": "BaseBdev2", 00:15:22.100 "aliases": [ 00:15:22.100 "d19cbb86-e41d-454d-abfd-91f35c3bb3ae" 00:15:22.100 ], 00:15:22.100 "product_name": "Malloc disk", 00:15:22.100 "block_size": 512, 00:15:22.100 "num_blocks": 65536, 00:15:22.100 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:22.100 "assigned_rate_limits": { 00:15:22.100 "rw_ios_per_sec": 0, 00:15:22.100 "rw_mbytes_per_sec": 0, 00:15:22.100 "r_mbytes_per_sec": 0, 00:15:22.100 "w_mbytes_per_sec": 0 00:15:22.100 }, 00:15:22.100 "claimed": false, 00:15:22.100 "zoned": false, 00:15:22.100 "supported_io_types": { 00:15:22.100 "read": true, 00:15:22.100 "write": true, 00:15:22.100 "unmap": true, 00:15:22.100 "flush": true, 00:15:22.100 "reset": true, 00:15:22.100 "nvme_admin": false, 00:15:22.100 "nvme_io": false, 00:15:22.100 "nvme_io_md": false, 00:15:22.100 "write_zeroes": true, 00:15:22.100 "zcopy": true, 00:15:22.100 "get_zone_info": false, 00:15:22.100 
"zone_management": false, 00:15:22.100 "zone_append": false, 00:15:22.100 "compare": false, 00:15:22.101 "compare_and_write": false, 00:15:22.101 "abort": true, 00:15:22.101 "seek_hole": false, 00:15:22.101 "seek_data": false, 00:15:22.101 "copy": true, 00:15:22.101 "nvme_iov_md": false 00:15:22.101 }, 00:15:22.101 "memory_domains": [ 00:15:22.101 { 00:15:22.101 "dma_device_id": "system", 00:15:22.101 "dma_device_type": 1 00:15:22.101 }, 00:15:22.101 { 00:15:22.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.101 "dma_device_type": 2 00:15:22.101 } 00:15:22.101 ], 00:15:22.101 "driver_specific": {} 00:15:22.101 } 00:15:22.101 ] 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.101 BaseBdev3 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.101 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 [ 00:15:22.367 { 00:15:22.367 "name": "BaseBdev3", 00:15:22.367 "aliases": [ 00:15:22.367 "7abf860a-7059-449c-ac65-9938335884fe" 00:15:22.367 ], 00:15:22.367 "product_name": "Malloc disk", 00:15:22.367 "block_size": 512, 00:15:22.367 "num_blocks": 65536, 00:15:22.367 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:22.367 "assigned_rate_limits": { 00:15:22.367 "rw_ios_per_sec": 0, 00:15:22.367 "rw_mbytes_per_sec": 0, 00:15:22.367 "r_mbytes_per_sec": 0, 00:15:22.367 "w_mbytes_per_sec": 0 00:15:22.367 }, 00:15:22.367 "claimed": false, 00:15:22.367 "zoned": false, 00:15:22.367 "supported_io_types": { 00:15:22.367 "read": true, 00:15:22.367 "write": true, 00:15:22.367 "unmap": true, 00:15:22.367 "flush": true, 00:15:22.367 "reset": true, 00:15:22.367 "nvme_admin": false, 00:15:22.367 "nvme_io": false, 00:15:22.367 "nvme_io_md": false, 00:15:22.367 "write_zeroes": true, 00:15:22.367 
"zcopy": true, 00:15:22.367 "get_zone_info": false, 00:15:22.367 "zone_management": false, 00:15:22.367 "zone_append": false, 00:15:22.367 "compare": false, 00:15:22.367 "compare_and_write": false, 00:15:22.367 "abort": true, 00:15:22.367 "seek_hole": false, 00:15:22.367 "seek_data": false, 00:15:22.367 "copy": true, 00:15:22.367 "nvme_iov_md": false 00:15:22.367 }, 00:15:22.367 "memory_domains": [ 00:15:22.367 { 00:15:22.367 "dma_device_id": "system", 00:15:22.367 "dma_device_type": 1 00:15:22.367 }, 00:15:22.367 { 00:15:22.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.367 "dma_device_type": 2 00:15:22.367 } 00:15:22.367 ], 00:15:22.367 "driver_specific": {} 00:15:22.367 } 00:15:22.367 ] 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.367 22:56:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 [2024-12-09 22:56:37.999184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.367 [2024-12-09 22:56:37.999291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.367 [2024-12-09 22:56:37.999353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.367 [2024-12-09 22:56:38.001803] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.367 22:56:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.367 "name": "Existed_Raid", 00:15:22.367 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:22.367 "strip_size_kb": 64, 00:15:22.367 "state": "configuring", 00:15:22.367 "raid_level": "raid0", 00:15:22.367 "superblock": true, 00:15:22.367 "num_base_bdevs": 3, 00:15:22.367 "num_base_bdevs_discovered": 2, 00:15:22.367 "num_base_bdevs_operational": 3, 00:15:22.367 "base_bdevs_list": [ 00:15:22.367 { 00:15:22.367 "name": "BaseBdev1", 00:15:22.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.367 "is_configured": false, 00:15:22.367 "data_offset": 0, 00:15:22.367 "data_size": 0 00:15:22.367 }, 00:15:22.367 { 00:15:22.367 "name": "BaseBdev2", 00:15:22.367 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:22.367 "is_configured": true, 00:15:22.367 "data_offset": 2048, 00:15:22.367 "data_size": 63488 00:15:22.367 }, 00:15:22.367 { 00:15:22.367 "name": "BaseBdev3", 00:15:22.367 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:22.367 "is_configured": true, 00:15:22.367 "data_offset": 2048, 00:15:22.367 "data_size": 63488 00:15:22.367 } 00:15:22.367 ] 00:15:22.367 }' 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.367 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.626 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:22.626 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.626 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.626 [2024-12-09 22:56:38.438454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.626 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.627 22:56:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.627 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.885 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.885 "name": "Existed_Raid", 00:15:22.885 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:22.885 "strip_size_kb": 64, 
00:15:22.885 "state": "configuring", 00:15:22.885 "raid_level": "raid0", 00:15:22.885 "superblock": true, 00:15:22.885 "num_base_bdevs": 3, 00:15:22.885 "num_base_bdevs_discovered": 1, 00:15:22.885 "num_base_bdevs_operational": 3, 00:15:22.885 "base_bdevs_list": [ 00:15:22.885 { 00:15:22.885 "name": "BaseBdev1", 00:15:22.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.885 "is_configured": false, 00:15:22.885 "data_offset": 0, 00:15:22.885 "data_size": 0 00:15:22.885 }, 00:15:22.885 { 00:15:22.885 "name": null, 00:15:22.885 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:22.885 "is_configured": false, 00:15:22.885 "data_offset": 0, 00:15:22.885 "data_size": 63488 00:15:22.885 }, 00:15:22.885 { 00:15:22.885 "name": "BaseBdev3", 00:15:22.885 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:22.885 "is_configured": true, 00:15:22.885 "data_offset": 2048, 00:15:22.885 "data_size": 63488 00:15:22.885 } 00:15:22.885 ] 00:15:22.885 }' 00:15:22.885 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.885 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.144 22:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.403 [2024-12-09 22:56:39.013137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.403 BaseBdev1 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.403 
[ 00:15:23.403 { 00:15:23.403 "name": "BaseBdev1", 00:15:23.403 "aliases": [ 00:15:23.403 "4088a626-cada-4713-b13f-3107514790d4" 00:15:23.403 ], 00:15:23.403 "product_name": "Malloc disk", 00:15:23.403 "block_size": 512, 00:15:23.403 "num_blocks": 65536, 00:15:23.403 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:23.403 "assigned_rate_limits": { 00:15:23.403 "rw_ios_per_sec": 0, 00:15:23.403 "rw_mbytes_per_sec": 0, 00:15:23.403 "r_mbytes_per_sec": 0, 00:15:23.403 "w_mbytes_per_sec": 0 00:15:23.403 }, 00:15:23.403 "claimed": true, 00:15:23.403 "claim_type": "exclusive_write", 00:15:23.403 "zoned": false, 00:15:23.403 "supported_io_types": { 00:15:23.403 "read": true, 00:15:23.403 "write": true, 00:15:23.403 "unmap": true, 00:15:23.403 "flush": true, 00:15:23.403 "reset": true, 00:15:23.403 "nvme_admin": false, 00:15:23.403 "nvme_io": false, 00:15:23.403 "nvme_io_md": false, 00:15:23.403 "write_zeroes": true, 00:15:23.403 "zcopy": true, 00:15:23.403 "get_zone_info": false, 00:15:23.403 "zone_management": false, 00:15:23.403 "zone_append": false, 00:15:23.403 "compare": false, 00:15:23.403 "compare_and_write": false, 00:15:23.403 "abort": true, 00:15:23.403 "seek_hole": false, 00:15:23.403 "seek_data": false, 00:15:23.403 "copy": true, 00:15:23.403 "nvme_iov_md": false 00:15:23.403 }, 00:15:23.403 "memory_domains": [ 00:15:23.403 { 00:15:23.403 "dma_device_id": "system", 00:15:23.403 "dma_device_type": 1 00:15:23.403 }, 00:15:23.403 { 00:15:23.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.403 "dma_device_type": 2 00:15:23.403 } 00:15:23.403 ], 00:15:23.403 "driver_specific": {} 00:15:23.403 } 00:15:23.403 ] 00:15:23.403 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.404 "name": "Existed_Raid", 00:15:23.404 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:23.404 "strip_size_kb": 64, 00:15:23.404 "state": "configuring", 00:15:23.404 "raid_level": "raid0", 00:15:23.404 "superblock": true, 
00:15:23.404 "num_base_bdevs": 3, 00:15:23.404 "num_base_bdevs_discovered": 2, 00:15:23.404 "num_base_bdevs_operational": 3, 00:15:23.404 "base_bdevs_list": [ 00:15:23.404 { 00:15:23.404 "name": "BaseBdev1", 00:15:23.404 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:23.404 "is_configured": true, 00:15:23.404 "data_offset": 2048, 00:15:23.404 "data_size": 63488 00:15:23.404 }, 00:15:23.404 { 00:15:23.404 "name": null, 00:15:23.404 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:23.404 "is_configured": false, 00:15:23.404 "data_offset": 0, 00:15:23.404 "data_size": 63488 00:15:23.404 }, 00:15:23.404 { 00:15:23.404 "name": "BaseBdev3", 00:15:23.404 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:23.404 "is_configured": true, 00:15:23.404 "data_offset": 2048, 00:15:23.404 "data_size": 63488 00:15:23.404 } 00:15:23.404 ] 00:15:23.404 }' 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.404 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.973 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.973 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.973 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.973 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:23.973 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.974 [2024-12-09 22:56:39.572325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.974 "name": "Existed_Raid", 00:15:23.974 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:23.974 "strip_size_kb": 64, 00:15:23.974 "state": "configuring", 00:15:23.974 "raid_level": "raid0", 00:15:23.974 "superblock": true, 00:15:23.974 "num_base_bdevs": 3, 00:15:23.974 "num_base_bdevs_discovered": 1, 00:15:23.974 "num_base_bdevs_operational": 3, 00:15:23.974 "base_bdevs_list": [ 00:15:23.974 { 00:15:23.974 "name": "BaseBdev1", 00:15:23.974 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:23.974 "is_configured": true, 00:15:23.974 "data_offset": 2048, 00:15:23.974 "data_size": 63488 00:15:23.974 }, 00:15:23.974 { 00:15:23.974 "name": null, 00:15:23.974 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:23.974 "is_configured": false, 00:15:23.974 "data_offset": 0, 00:15:23.974 "data_size": 63488 00:15:23.974 }, 00:15:23.974 { 00:15:23.974 "name": null, 00:15:23.974 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:23.974 "is_configured": false, 00:15:23.974 "data_offset": 0, 00:15:23.974 "data_size": 63488 00:15:23.974 } 00:15:23.974 ] 00:15:23.974 }' 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.974 22:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.232 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.232 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.232 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.232 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.491 [2024-12-09 22:56:40.111430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.491 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.491 "name": "Existed_Raid", 00:15:24.491 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:24.491 "strip_size_kb": 64, 00:15:24.491 "state": "configuring", 00:15:24.491 "raid_level": "raid0", 00:15:24.491 "superblock": true, 00:15:24.491 "num_base_bdevs": 3, 00:15:24.491 "num_base_bdevs_discovered": 2, 00:15:24.491 "num_base_bdevs_operational": 3, 00:15:24.491 "base_bdevs_list": [ 00:15:24.491 { 00:15:24.491 "name": "BaseBdev1", 00:15:24.491 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:24.491 "is_configured": true, 00:15:24.491 "data_offset": 2048, 00:15:24.491 "data_size": 63488 00:15:24.491 }, 00:15:24.491 { 00:15:24.491 "name": null, 00:15:24.492 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:24.492 "is_configured": false, 00:15:24.492 "data_offset": 0, 00:15:24.492 "data_size": 63488 00:15:24.492 }, 00:15:24.492 { 00:15:24.492 "name": "BaseBdev3", 00:15:24.492 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:24.492 "is_configured": true, 00:15:24.492 "data_offset": 2048, 00:15:24.492 "data_size": 63488 00:15:24.492 } 00:15:24.492 ] 00:15:24.492 }' 00:15:24.492 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.492 22:56:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.751 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.751 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.751 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.751 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.751 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.010 [2024-12-09 22:56:40.614650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.010 22:56:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.011 "name": "Existed_Raid", 00:15:25.011 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:25.011 "strip_size_kb": 64, 00:15:25.011 "state": "configuring", 00:15:25.011 "raid_level": "raid0", 00:15:25.011 "superblock": true, 00:15:25.011 "num_base_bdevs": 3, 00:15:25.011 "num_base_bdevs_discovered": 1, 00:15:25.011 "num_base_bdevs_operational": 3, 00:15:25.011 "base_bdevs_list": [ 00:15:25.011 { 00:15:25.011 "name": null, 00:15:25.011 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:25.011 "is_configured": false, 00:15:25.011 "data_offset": 0, 00:15:25.011 "data_size": 63488 00:15:25.011 }, 00:15:25.011 { 00:15:25.011 "name": null, 00:15:25.011 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:25.011 "is_configured": false, 00:15:25.011 "data_offset": 0, 00:15:25.011 
"data_size": 63488 00:15:25.011 }, 00:15:25.011 { 00:15:25.011 "name": "BaseBdev3", 00:15:25.011 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:25.011 "is_configured": true, 00:15:25.011 "data_offset": 2048, 00:15:25.011 "data_size": 63488 00:15:25.011 } 00:15:25.011 ] 00:15:25.011 }' 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.011 22:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.580 [2024-12-09 22:56:41.244682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:25.580 22:56:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.580 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.581 "name": "Existed_Raid", 00:15:25.581 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:25.581 "strip_size_kb": 64, 00:15:25.581 "state": "configuring", 00:15:25.581 "raid_level": "raid0", 00:15:25.581 "superblock": true, 00:15:25.581 "num_base_bdevs": 3, 00:15:25.581 
"num_base_bdevs_discovered": 2, 00:15:25.581 "num_base_bdevs_operational": 3, 00:15:25.581 "base_bdevs_list": [ 00:15:25.581 { 00:15:25.581 "name": null, 00:15:25.581 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:25.581 "is_configured": false, 00:15:25.581 "data_offset": 0, 00:15:25.581 "data_size": 63488 00:15:25.581 }, 00:15:25.581 { 00:15:25.581 "name": "BaseBdev2", 00:15:25.581 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:25.581 "is_configured": true, 00:15:25.581 "data_offset": 2048, 00:15:25.581 "data_size": 63488 00:15:25.581 }, 00:15:25.581 { 00:15:25.581 "name": "BaseBdev3", 00:15:25.581 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:25.581 "is_configured": true, 00:15:25.581 "data_offset": 2048, 00:15:25.581 "data_size": 63488 00:15:25.581 } 00:15:25.581 ] 00:15:25.581 }' 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.581 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.150 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.150 22:56:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4088a626-cada-4713-b13f-3107514790d4 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.151 [2024-12-09 22:56:41.881593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:26.151 [2024-12-09 22:56:41.881973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:26.151 [2024-12-09 22:56:41.882032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:26.151 [2024-12-09 22:56:41.882385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:26.151 [2024-12-09 22:56:41.882625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:26.151 [2024-12-09 22:56:41.882674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:26.151 NewBaseBdev 00:15:26.151 [2024-12-09 22:56:41.882894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.151 [ 00:15:26.151 { 00:15:26.151 "name": "NewBaseBdev", 00:15:26.151 "aliases": [ 00:15:26.151 "4088a626-cada-4713-b13f-3107514790d4" 00:15:26.151 ], 00:15:26.151 "product_name": "Malloc disk", 00:15:26.151 "block_size": 512, 00:15:26.151 "num_blocks": 65536, 00:15:26.151 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:26.151 "assigned_rate_limits": { 00:15:26.151 "rw_ios_per_sec": 0, 00:15:26.151 "rw_mbytes_per_sec": 0, 00:15:26.151 "r_mbytes_per_sec": 0, 00:15:26.151 "w_mbytes_per_sec": 0 00:15:26.151 }, 00:15:26.151 "claimed": true, 00:15:26.151 "claim_type": "exclusive_write", 00:15:26.151 "zoned": false, 00:15:26.151 "supported_io_types": { 00:15:26.151 "read": true, 00:15:26.151 "write": true, 
00:15:26.151 "unmap": true, 00:15:26.151 "flush": true, 00:15:26.151 "reset": true, 00:15:26.151 "nvme_admin": false, 00:15:26.151 "nvme_io": false, 00:15:26.151 "nvme_io_md": false, 00:15:26.151 "write_zeroes": true, 00:15:26.151 "zcopy": true, 00:15:26.151 "get_zone_info": false, 00:15:26.151 "zone_management": false, 00:15:26.151 "zone_append": false, 00:15:26.151 "compare": false, 00:15:26.151 "compare_and_write": false, 00:15:26.151 "abort": true, 00:15:26.151 "seek_hole": false, 00:15:26.151 "seek_data": false, 00:15:26.151 "copy": true, 00:15:26.151 "nvme_iov_md": false 00:15:26.151 }, 00:15:26.151 "memory_domains": [ 00:15:26.151 { 00:15:26.151 "dma_device_id": "system", 00:15:26.151 "dma_device_type": 1 00:15:26.151 }, 00:15:26.151 { 00:15:26.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.151 "dma_device_type": 2 00:15:26.151 } 00:15:26.151 ], 00:15:26.151 "driver_specific": {} 00:15:26.151 } 00:15:26.151 ] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.151 "name": "Existed_Raid", 00:15:26.151 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:26.151 "strip_size_kb": 64, 00:15:26.151 "state": "online", 00:15:26.151 "raid_level": "raid0", 00:15:26.151 "superblock": true, 00:15:26.151 "num_base_bdevs": 3, 00:15:26.151 "num_base_bdevs_discovered": 3, 00:15:26.151 "num_base_bdevs_operational": 3, 00:15:26.151 "base_bdevs_list": [ 00:15:26.151 { 00:15:26.151 "name": "NewBaseBdev", 00:15:26.151 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:26.151 "is_configured": true, 00:15:26.151 "data_offset": 2048, 00:15:26.151 "data_size": 63488 00:15:26.151 }, 00:15:26.151 { 00:15:26.151 "name": "BaseBdev2", 00:15:26.151 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:26.151 "is_configured": true, 00:15:26.151 "data_offset": 2048, 00:15:26.151 "data_size": 63488 00:15:26.151 }, 00:15:26.151 { 00:15:26.151 "name": "BaseBdev3", 00:15:26.151 "uuid": 
"7abf860a-7059-449c-ac65-9938335884fe", 00:15:26.151 "is_configured": true, 00:15:26.151 "data_offset": 2048, 00:15:26.151 "data_size": 63488 00:15:26.151 } 00:15:26.151 ] 00:15:26.151 }' 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.151 22:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.720 [2024-12-09 22:56:42.433117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.720 "name": "Existed_Raid", 00:15:26.720 "aliases": [ 00:15:26.720 "abf8631a-4811-4c74-b90b-199fc2bf3e58" 
00:15:26.720 ], 00:15:26.720 "product_name": "Raid Volume", 00:15:26.720 "block_size": 512, 00:15:26.720 "num_blocks": 190464, 00:15:26.720 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:26.720 "assigned_rate_limits": { 00:15:26.720 "rw_ios_per_sec": 0, 00:15:26.720 "rw_mbytes_per_sec": 0, 00:15:26.720 "r_mbytes_per_sec": 0, 00:15:26.720 "w_mbytes_per_sec": 0 00:15:26.720 }, 00:15:26.720 "claimed": false, 00:15:26.720 "zoned": false, 00:15:26.720 "supported_io_types": { 00:15:26.720 "read": true, 00:15:26.720 "write": true, 00:15:26.720 "unmap": true, 00:15:26.720 "flush": true, 00:15:26.720 "reset": true, 00:15:26.720 "nvme_admin": false, 00:15:26.720 "nvme_io": false, 00:15:26.720 "nvme_io_md": false, 00:15:26.720 "write_zeroes": true, 00:15:26.720 "zcopy": false, 00:15:26.720 "get_zone_info": false, 00:15:26.720 "zone_management": false, 00:15:26.720 "zone_append": false, 00:15:26.720 "compare": false, 00:15:26.720 "compare_and_write": false, 00:15:26.720 "abort": false, 00:15:26.720 "seek_hole": false, 00:15:26.720 "seek_data": false, 00:15:26.720 "copy": false, 00:15:26.720 "nvme_iov_md": false 00:15:26.720 }, 00:15:26.720 "memory_domains": [ 00:15:26.720 { 00:15:26.720 "dma_device_id": "system", 00:15:26.720 "dma_device_type": 1 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.720 "dma_device_type": 2 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "dma_device_id": "system", 00:15:26.720 "dma_device_type": 1 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.720 "dma_device_type": 2 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "dma_device_id": "system", 00:15:26.720 "dma_device_type": 1 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.720 "dma_device_type": 2 00:15:26.720 } 00:15:26.720 ], 00:15:26.720 "driver_specific": { 00:15:26.720 "raid": { 00:15:26.720 "uuid": "abf8631a-4811-4c74-b90b-199fc2bf3e58", 00:15:26.720 
"strip_size_kb": 64, 00:15:26.720 "state": "online", 00:15:26.720 "raid_level": "raid0", 00:15:26.720 "superblock": true, 00:15:26.720 "num_base_bdevs": 3, 00:15:26.720 "num_base_bdevs_discovered": 3, 00:15:26.720 "num_base_bdevs_operational": 3, 00:15:26.720 "base_bdevs_list": [ 00:15:26.720 { 00:15:26.720 "name": "NewBaseBdev", 00:15:26.720 "uuid": "4088a626-cada-4713-b13f-3107514790d4", 00:15:26.720 "is_configured": true, 00:15:26.720 "data_offset": 2048, 00:15:26.720 "data_size": 63488 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "name": "BaseBdev2", 00:15:26.720 "uuid": "d19cbb86-e41d-454d-abfd-91f35c3bb3ae", 00:15:26.720 "is_configured": true, 00:15:26.720 "data_offset": 2048, 00:15:26.720 "data_size": 63488 00:15:26.720 }, 00:15:26.720 { 00:15:26.720 "name": "BaseBdev3", 00:15:26.720 "uuid": "7abf860a-7059-449c-ac65-9938335884fe", 00:15:26.720 "is_configured": true, 00:15:26.720 "data_offset": 2048, 00:15:26.720 "data_size": 63488 00:15:26.720 } 00:15:26.720 ] 00:15:26.720 } 00:15:26.720 } 00:15:26.720 }' 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:26.720 BaseBdev2 00:15:26.720 BaseBdev3' 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.720 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.981 [2024-12-09 22:56:42.728311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.981 [2024-12-09 22:56:42.728397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.981 [2024-12-09 22:56:42.728593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.981 [2024-12-09 22:56:42.728703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.981 [2024-12-09 22:56:42.728750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64916 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64916 ']' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64916 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64916 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64916' 00:15:26.981 killing process with pid 64916 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64916 00:15:26.981 [2024-12-09 22:56:42.777178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.981 22:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64916 00:15:27.550 [2024-12-09 22:56:43.133765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.954 22:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:28.954 00:15:28.954 real 0m11.442s 00:15:28.954 user 0m17.919s 00:15:28.954 sys 0m2.100s 00:15:28.954 ************************************ 00:15:28.954 END TEST raid_state_function_test_sb 00:15:28.954 ************************************ 00:15:28.954 22:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.954 22:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.954 22:56:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:28.954 22:56:44 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:28.954 22:56:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.954 22:56:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.954 ************************************ 00:15:28.954 START TEST raid_superblock_test 00:15:28.954 ************************************ 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:28.954 22:56:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65547 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65547 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65547 ']' 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.954 22:56:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.954 [2024-12-09 22:56:44.640800] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:15:28.954 [2024-12-09 22:56:44.641064] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65547 ] 00:15:28.954 [2024-12-09 22:56:44.804490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.214 [2024-12-09 22:56:44.950553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.473 [2024-12-09 22:56:45.208941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.473 [2024-12-09 22:56:45.209069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:29.733 
22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.733 malloc1 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.733 [2024-12-09 22:56:45.576363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.733 [2024-12-09 22:56:45.576458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.733 [2024-12-09 22:56:45.576498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:29.733 [2024-12-09 22:56:45.576526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.733 [2024-12-09 22:56:45.579276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.733 [2024-12-09 22:56:45.579316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.733 pt1 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.733 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 malloc2 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 [2024-12-09 22:56:45.639687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.994 [2024-12-09 22:56:45.639812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.994 [2024-12-09 22:56:45.639855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:29.994 [2024-12-09 22:56:45.639888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.994 [2024-12-09 22:56:45.642634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.994 [2024-12-09 22:56:45.642704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.994 
pt2 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 malloc3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 [2024-12-09 22:56:45.720706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:29.994 [2024-12-09 22:56:45.720887] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.994 [2024-12-09 22:56:45.720944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:29.994 [2024-12-09 22:56:45.721035] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.994 [2024-12-09 22:56:45.723942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.994 [2024-12-09 22:56:45.724038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:29.994 pt3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 [2024-12-09 22:56:45.732879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.994 [2024-12-09 22:56:45.735322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.994 [2024-12-09 22:56:45.735469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:29.994 [2024-12-09 22:56:45.735733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:29.994 [2024-12-09 22:56:45.735783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:29.994 [2024-12-09 22:56:45.736140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:29.994 [2024-12-09 22:56:45.736364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:29.994 [2024-12-09 22:56:45.736412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:29.994 [2024-12-09 22:56:45.736787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.994 22:56:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.994 "name": "raid_bdev1", 00:15:29.994 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:29.994 "strip_size_kb": 64, 00:15:29.994 "state": "online", 00:15:29.994 "raid_level": "raid0", 00:15:29.994 "superblock": true, 00:15:29.994 "num_base_bdevs": 3, 00:15:29.994 "num_base_bdevs_discovered": 3, 00:15:29.994 "num_base_bdevs_operational": 3, 00:15:29.994 "base_bdevs_list": [ 00:15:29.994 { 00:15:29.994 "name": "pt1", 00:15:29.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.994 "is_configured": true, 00:15:29.994 "data_offset": 2048, 00:15:29.994 "data_size": 63488 00:15:29.994 }, 00:15:29.994 { 00:15:29.994 "name": "pt2", 00:15:29.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.994 "is_configured": true, 00:15:29.994 "data_offset": 2048, 00:15:29.994 "data_size": 63488 00:15:29.994 }, 00:15:29.994 { 00:15:29.994 "name": "pt3", 00:15:29.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:29.994 "is_configured": true, 00:15:29.994 "data_offset": 2048, 00:15:29.994 "data_size": 63488 00:15:29.994 } 00:15:29.994 ] 00:15:29.994 }' 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.994 22:56:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.563 [2024-12-09 22:56:46.204579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.563 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:30.563 "name": "raid_bdev1", 00:15:30.563 "aliases": [ 00:15:30.563 "446fef58-eb83-44a4-b328-b3ce2b65faf0" 00:15:30.563 ], 00:15:30.563 "product_name": "Raid Volume", 00:15:30.563 "block_size": 512, 00:15:30.563 "num_blocks": 190464, 00:15:30.563 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:30.563 "assigned_rate_limits": { 00:15:30.563 "rw_ios_per_sec": 0, 00:15:30.563 "rw_mbytes_per_sec": 0, 00:15:30.563 "r_mbytes_per_sec": 0, 00:15:30.563 "w_mbytes_per_sec": 0 00:15:30.563 }, 00:15:30.563 "claimed": false, 00:15:30.563 "zoned": false, 00:15:30.563 "supported_io_types": { 00:15:30.563 "read": true, 00:15:30.563 "write": true, 00:15:30.563 "unmap": true, 00:15:30.563 "flush": true, 00:15:30.563 "reset": true, 00:15:30.563 "nvme_admin": false, 00:15:30.563 "nvme_io": false, 00:15:30.563 "nvme_io_md": false, 00:15:30.563 "write_zeroes": true, 00:15:30.563 "zcopy": false, 00:15:30.563 "get_zone_info": false, 00:15:30.563 "zone_management": false, 00:15:30.563 "zone_append": false, 00:15:30.563 "compare": 
false, 00:15:30.563 "compare_and_write": false, 00:15:30.563 "abort": false, 00:15:30.563 "seek_hole": false, 00:15:30.563 "seek_data": false, 00:15:30.563 "copy": false, 00:15:30.563 "nvme_iov_md": false 00:15:30.563 }, 00:15:30.563 "memory_domains": [ 00:15:30.563 { 00:15:30.563 "dma_device_id": "system", 00:15:30.563 "dma_device_type": 1 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.563 "dma_device_type": 2 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "dma_device_id": "system", 00:15:30.563 "dma_device_type": 1 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.563 "dma_device_type": 2 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "dma_device_id": "system", 00:15:30.563 "dma_device_type": 1 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.563 "dma_device_type": 2 00:15:30.563 } 00:15:30.563 ], 00:15:30.563 "driver_specific": { 00:15:30.563 "raid": { 00:15:30.563 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:30.563 "strip_size_kb": 64, 00:15:30.563 "state": "online", 00:15:30.563 "raid_level": "raid0", 00:15:30.563 "superblock": true, 00:15:30.563 "num_base_bdevs": 3, 00:15:30.563 "num_base_bdevs_discovered": 3, 00:15:30.563 "num_base_bdevs_operational": 3, 00:15:30.563 "base_bdevs_list": [ 00:15:30.563 { 00:15:30.563 "name": "pt1", 00:15:30.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:30.563 "is_configured": true, 00:15:30.563 "data_offset": 2048, 00:15:30.563 "data_size": 63488 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "name": "pt2", 00:15:30.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.564 "is_configured": true, 00:15:30.564 "data_offset": 2048, 00:15:30.564 "data_size": 63488 00:15:30.564 }, 00:15:30.564 { 00:15:30.564 "name": "pt3", 00:15:30.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:30.564 "is_configured": true, 00:15:30.564 "data_offset": 2048, 00:15:30.564 "data_size": 
63488 00:15:30.564 } 00:15:30.564 ] 00:15:30.564 } 00:15:30.564 } 00:15:30.564 }' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:30.564 pt2 00:15:30.564 pt3' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.564 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 [2024-12-09 22:56:46.495958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=446fef58-eb83-44a4-b328-b3ce2b65faf0 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 446fef58-eb83-44a4-b328-b3ce2b65faf0 ']' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 [2024-12-09 22:56:46.539567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.824 [2024-12-09 22:56:46.539642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.824 [2024-12-09 22:56:46.539795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.824 [2024-12-09 22:56:46.539914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.824 [2024-12-09 22:56:46.539965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:30.824 22:56:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.824 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.084 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.084 [2024-12-09 22:56:46.699361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:31.084 [2024-12-09 22:56:46.701828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:31.084 [2024-12-09 22:56:46.701952] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:31.084 [2024-12-09 22:56:46.702024] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:31.084 [2024-12-09 22:56:46.702090] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:31.084 [2024-12-09 22:56:46.702112] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:31.085 [2024-12-09 22:56:46.702132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.085 [2024-12-09 22:56:46.702145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:31.085 request: 00:15:31.085 { 00:15:31.085 "name": "raid_bdev1", 00:15:31.085 "raid_level": "raid0", 00:15:31.085 "base_bdevs": [ 00:15:31.085 "malloc1", 00:15:31.085 "malloc2", 00:15:31.085 "malloc3" 00:15:31.085 ], 00:15:31.085 "strip_size_kb": 64, 00:15:31.085 "superblock": false, 00:15:31.085 "method": "bdev_raid_create", 00:15:31.085 "req_id": 1 00:15:31.085 } 00:15:31.085 Got JSON-RPC error response 00:15:31.085 response: 00:15:31.085 { 00:15:31.085 "code": -17, 00:15:31.085 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:31.085 } 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 [2024-12-09 22:56:46.755187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.085 [2024-12-09 22:56:46.755314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.085 [2024-12-09 22:56:46.755362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:31.085 [2024-12-09 22:56:46.755401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.085 [2024-12-09 22:56:46.758377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.085 [2024-12-09 22:56:46.758476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.085 [2024-12-09 22:56:46.758638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:31.085 [2024-12-09 22:56:46.758751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:15:31.085 pt1 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.085 "name": "raid_bdev1", 00:15:31.085 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:31.085 
"strip_size_kb": 64, 00:15:31.085 "state": "configuring", 00:15:31.085 "raid_level": "raid0", 00:15:31.085 "superblock": true, 00:15:31.085 "num_base_bdevs": 3, 00:15:31.085 "num_base_bdevs_discovered": 1, 00:15:31.085 "num_base_bdevs_operational": 3, 00:15:31.085 "base_bdevs_list": [ 00:15:31.085 { 00:15:31.085 "name": "pt1", 00:15:31.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.085 "is_configured": true, 00:15:31.085 "data_offset": 2048, 00:15:31.085 "data_size": 63488 00:15:31.085 }, 00:15:31.085 { 00:15:31.085 "name": null, 00:15:31.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.085 "is_configured": false, 00:15:31.085 "data_offset": 2048, 00:15:31.085 "data_size": 63488 00:15:31.085 }, 00:15:31.085 { 00:15:31.085 "name": null, 00:15:31.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.085 "is_configured": false, 00:15:31.085 "data_offset": 2048, 00:15:31.085 "data_size": 63488 00:15:31.085 } 00:15:31.085 ] 00:15:31.085 }' 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.085 22:56:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.345 [2024-12-09 22:56:47.182489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.345 [2024-12-09 22:56:47.182582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.345 [2024-12-09 22:56:47.182616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:31.345 [2024-12-09 22:56:47.182629] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.345 [2024-12-09 22:56:47.183197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.345 [2024-12-09 22:56:47.183218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.345 [2024-12-09 22:56:47.183327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:31.345 [2024-12-09 22:56:47.183359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.345 pt2 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.345 [2024-12-09 22:56:47.194486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.345 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.604 22:56:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.604 "name": "raid_bdev1", 00:15:31.604 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:31.604 "strip_size_kb": 64, 00:15:31.604 "state": "configuring", 00:15:31.604 "raid_level": "raid0", 00:15:31.604 "superblock": true, 00:15:31.604 "num_base_bdevs": 3, 00:15:31.604 "num_base_bdevs_discovered": 1, 00:15:31.604 "num_base_bdevs_operational": 3, 00:15:31.604 "base_bdevs_list": [ 00:15:31.604 { 00:15:31.604 "name": "pt1", 00:15:31.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.604 "is_configured": true, 00:15:31.604 "data_offset": 2048, 00:15:31.604 "data_size": 63488 00:15:31.604 }, 00:15:31.604 { 00:15:31.604 "name": null, 00:15:31.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.604 "is_configured": false, 00:15:31.604 "data_offset": 0, 00:15:31.604 "data_size": 63488 00:15:31.604 }, 00:15:31.604 { 00:15:31.604 "name": null, 00:15:31.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.604 
"is_configured": false, 00:15:31.604 "data_offset": 2048, 00:15:31.604 "data_size": 63488 00:15:31.604 } 00:15:31.604 ] 00:15:31.604 }' 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.604 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.911 [2024-12-09 22:56:47.649689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.911 [2024-12-09 22:56:47.649835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.911 [2024-12-09 22:56:47.649879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:31.911 [2024-12-09 22:56:47.649916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.911 [2024-12-09 22:56:47.650578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.911 [2024-12-09 22:56:47.650643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.911 [2024-12-09 22:56:47.650790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:31.911 [2024-12-09 22:56:47.650854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.911 pt2 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.911 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.911 [2024-12-09 22:56:47.661622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:31.912 [2024-12-09 22:56:47.661740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.912 [2024-12-09 22:56:47.661784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:31.912 [2024-12-09 22:56:47.661830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.912 [2024-12-09 22:56:47.662344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.912 [2024-12-09 22:56:47.662411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:31.912 [2024-12-09 22:56:47.662524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:31.912 [2024-12-09 22:56:47.662588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:31.912 [2024-12-09 22:56:47.662792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:31.912 [2024-12-09 22:56:47.662842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:31.912 [2024-12-09 22:56:47.663154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:31.912 [2024-12-09 22:56:47.663374] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:31.912 [2024-12-09 22:56:47.663416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:31.912 [2024-12-09 22:56:47.663645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.912 pt3 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.912 "name": "raid_bdev1", 00:15:31.912 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:31.912 "strip_size_kb": 64, 00:15:31.912 "state": "online", 00:15:31.912 "raid_level": "raid0", 00:15:31.912 "superblock": true, 00:15:31.912 "num_base_bdevs": 3, 00:15:31.912 "num_base_bdevs_discovered": 3, 00:15:31.912 "num_base_bdevs_operational": 3, 00:15:31.912 "base_bdevs_list": [ 00:15:31.912 { 00:15:31.912 "name": "pt1", 00:15:31.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.912 "is_configured": true, 00:15:31.912 "data_offset": 2048, 00:15:31.912 "data_size": 63488 00:15:31.912 }, 00:15:31.912 { 00:15:31.912 "name": "pt2", 00:15:31.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.912 "is_configured": true, 00:15:31.912 "data_offset": 2048, 00:15:31.912 "data_size": 63488 00:15:31.912 }, 00:15:31.912 { 00:15:31.912 "name": "pt3", 00:15:31.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.912 "is_configured": true, 00:15:31.912 "data_offset": 2048, 00:15:31.912 "data_size": 63488 00:15:31.912 } 00:15:31.912 ] 00:15:31.912 }' 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.912 22:56:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:32.500 22:56:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.500 [2024-12-09 22:56:48.137290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.500 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.500 "name": "raid_bdev1", 00:15:32.500 "aliases": [ 00:15:32.500 "446fef58-eb83-44a4-b328-b3ce2b65faf0" 00:15:32.500 ], 00:15:32.500 "product_name": "Raid Volume", 00:15:32.500 "block_size": 512, 00:15:32.500 "num_blocks": 190464, 00:15:32.500 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:32.500 "assigned_rate_limits": { 00:15:32.500 "rw_ios_per_sec": 0, 00:15:32.500 "rw_mbytes_per_sec": 0, 00:15:32.500 "r_mbytes_per_sec": 0, 00:15:32.500 "w_mbytes_per_sec": 0 00:15:32.500 }, 00:15:32.500 "claimed": false, 00:15:32.500 "zoned": false, 00:15:32.500 "supported_io_types": { 00:15:32.500 "read": true, 00:15:32.500 "write": true, 00:15:32.500 "unmap": true, 00:15:32.500 "flush": true, 00:15:32.500 "reset": true, 00:15:32.500 "nvme_admin": false, 00:15:32.500 "nvme_io": false, 00:15:32.500 "nvme_io_md": false, 00:15:32.500 
"write_zeroes": true, 00:15:32.500 "zcopy": false, 00:15:32.500 "get_zone_info": false, 00:15:32.500 "zone_management": false, 00:15:32.500 "zone_append": false, 00:15:32.500 "compare": false, 00:15:32.500 "compare_and_write": false, 00:15:32.500 "abort": false, 00:15:32.500 "seek_hole": false, 00:15:32.500 "seek_data": false, 00:15:32.500 "copy": false, 00:15:32.500 "nvme_iov_md": false 00:15:32.500 }, 00:15:32.500 "memory_domains": [ 00:15:32.500 { 00:15:32.500 "dma_device_id": "system", 00:15:32.500 "dma_device_type": 1 00:15:32.500 }, 00:15:32.500 { 00:15:32.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.500 "dma_device_type": 2 00:15:32.500 }, 00:15:32.500 { 00:15:32.500 "dma_device_id": "system", 00:15:32.500 "dma_device_type": 1 00:15:32.500 }, 00:15:32.500 { 00:15:32.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.500 "dma_device_type": 2 00:15:32.501 }, 00:15:32.501 { 00:15:32.501 "dma_device_id": "system", 00:15:32.501 "dma_device_type": 1 00:15:32.501 }, 00:15:32.501 { 00:15:32.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.501 "dma_device_type": 2 00:15:32.501 } 00:15:32.501 ], 00:15:32.501 "driver_specific": { 00:15:32.501 "raid": { 00:15:32.501 "uuid": "446fef58-eb83-44a4-b328-b3ce2b65faf0", 00:15:32.501 "strip_size_kb": 64, 00:15:32.501 "state": "online", 00:15:32.501 "raid_level": "raid0", 00:15:32.501 "superblock": true, 00:15:32.501 "num_base_bdevs": 3, 00:15:32.501 "num_base_bdevs_discovered": 3, 00:15:32.501 "num_base_bdevs_operational": 3, 00:15:32.501 "base_bdevs_list": [ 00:15:32.501 { 00:15:32.501 "name": "pt1", 00:15:32.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.501 "is_configured": true, 00:15:32.501 "data_offset": 2048, 00:15:32.501 "data_size": 63488 00:15:32.501 }, 00:15:32.501 { 00:15:32.501 "name": "pt2", 00:15:32.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.501 "is_configured": true, 00:15:32.501 "data_offset": 2048, 00:15:32.501 "data_size": 63488 00:15:32.501 }, 00:15:32.501 
{ 00:15:32.501 "name": "pt3", 00:15:32.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.501 "is_configured": true, 00:15:32.501 "data_offset": 2048, 00:15:32.501 "data_size": 63488 00:15:32.501 } 00:15:32.501 ] 00:15:32.501 } 00:15:32.501 } 00:15:32.501 }' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:32.501 pt2 00:15:32.501 pt3' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:32.501 22:56:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.501 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.761 
[2024-12-09 22:56:48.424925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 446fef58-eb83-44a4-b328-b3ce2b65faf0 '!=' 446fef58-eb83-44a4-b328-b3ce2b65faf0 ']' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65547 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65547 ']' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65547 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65547 00:15:32.761 killing process with pid 65547 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65547' 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65547 00:15:32.761 22:56:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65547 00:15:32.761 [2024-12-09 22:56:48.494726] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.761 [2024-12-09 22:56:48.494886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.761 [2024-12-09 22:56:48.494975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.761 [2024-12-09 22:56:48.494991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:33.021 [2024-12-09 22:56:48.867913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.400 22:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:34.401 00:15:34.401 real 0m5.714s 00:15:34.401 user 0m7.948s 00:15:34.401 sys 0m1.084s 00:15:34.401 22:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.401 22:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.401 ************************************ 00:15:34.401 END TEST raid_superblock_test 00:15:34.401 ************************************ 00:15:34.659 22:56:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:15:34.659 22:56:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:34.659 22:56:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.659 22:56:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.659 ************************************ 00:15:34.659 START TEST raid_read_error_test 00:15:34.659 ************************************ 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:34.660 22:56:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fKcl79Bk6B 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65808 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65808 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65808 ']' 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.660 22:56:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.660 [2024-12-09 22:56:50.442153] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:15:34.660 [2024-12-09 22:56:50.442325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65808 ] 00:15:34.918 [2024-12-09 22:56:50.626250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.176 [2024-12-09 22:56:50.777948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.176 [2024-12-09 22:56:51.028313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.176 [2024-12-09 22:56:51.028408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 BaseBdev1_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 true 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 [2024-12-09 22:56:51.378514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:35.743 [2024-12-09 22:56:51.378579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.743 [2024-12-09 22:56:51.378601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:35.743 [2024-12-09 22:56:51.378613] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.743 [2024-12-09 22:56:51.381144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.743 [2024-12-09 22:56:51.381186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:35.743 BaseBdev1 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 BaseBdev2_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 true 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 [2024-12-09 22:56:51.453087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:35.743 [2024-12-09 22:56:51.453181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.743 [2024-12-09 22:56:51.453203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:35.743 [2024-12-09 22:56:51.453217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.743 [2024-12-09 22:56:51.455982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.743 [2024-12-09 22:56:51.456029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:35.743 BaseBdev2 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 BaseBdev3_malloc 00:15:35.743 22:56:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 true 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 [2024-12-09 22:56:51.545751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:35.743 [2024-12-09 22:56:51.545828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.743 [2024-12-09 22:56:51.545866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:35.743 [2024-12-09 22:56:51.545889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.743 [2024-12-09 22:56:51.548590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.743 [2024-12-09 22:56:51.548632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:35.743 BaseBdev3 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.743 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.743 [2024-12-09 22:56:51.557818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.743 [2024-12-09 22:56:51.559996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.744 [2024-12-09 22:56:51.560069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.744 [2024-12-09 22:56:51.560277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:35.744 [2024-12-09 22:56:51.560293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:35.744 [2024-12-09 22:56:51.560670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:35.744 [2024-12-09 22:56:51.560898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:35.744 [2024-12-09 22:56:51.560955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:35.744 [2024-12-09 22:56:51.561173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.744 22:56:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.744 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.002 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.002 "name": "raid_bdev1", 00:15:36.002 "uuid": "2d063e5f-8d4a-445b-9a56-ac2d793587b9", 00:15:36.002 "strip_size_kb": 64, 00:15:36.002 "state": "online", 00:15:36.002 "raid_level": "raid0", 00:15:36.002 "superblock": true, 00:15:36.002 "num_base_bdevs": 3, 00:15:36.002 "num_base_bdevs_discovered": 3, 00:15:36.002 "num_base_bdevs_operational": 3, 00:15:36.002 "base_bdevs_list": [ 00:15:36.002 { 00:15:36.002 "name": "BaseBdev1", 00:15:36.002 "uuid": "6fef769d-3253-54d4-b90a-17fd30e7d095", 00:15:36.002 "is_configured": true, 00:15:36.002 "data_offset": 2048, 00:15:36.002 "data_size": 63488 00:15:36.002 }, 00:15:36.002 { 00:15:36.002 "name": "BaseBdev2", 00:15:36.002 "uuid": "3ecdb7b1-92d3-5eab-9ce7-ad37f5c699ae", 00:15:36.002 "is_configured": true, 00:15:36.002 "data_offset": 2048, 00:15:36.002 "data_size": 63488 
00:15:36.002 }, 00:15:36.002 { 00:15:36.002 "name": "BaseBdev3", 00:15:36.002 "uuid": "cf38e2b5-b0ab-5752-b8bc-91a1329e70a0", 00:15:36.002 "is_configured": true, 00:15:36.002 "data_offset": 2048, 00:15:36.002 "data_size": 63488 00:15:36.002 } 00:15:36.002 ] 00:15:36.002 }' 00:15:36.002 22:56:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.002 22:56:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.260 22:56:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:36.260 22:56:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:36.529 [2024-12-09 22:56:52.122638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.465 "name": "raid_bdev1", 00:15:37.465 "uuid": "2d063e5f-8d4a-445b-9a56-ac2d793587b9", 00:15:37.465 "strip_size_kb": 64, 00:15:37.465 "state": "online", 00:15:37.465 "raid_level": "raid0", 00:15:37.465 "superblock": true, 00:15:37.465 "num_base_bdevs": 3, 00:15:37.465 "num_base_bdevs_discovered": 3, 00:15:37.465 "num_base_bdevs_operational": 3, 00:15:37.465 "base_bdevs_list": [ 00:15:37.465 { 00:15:37.465 "name": "BaseBdev1", 00:15:37.465 "uuid": "6fef769d-3253-54d4-b90a-17fd30e7d095", 00:15:37.465 "is_configured": true, 00:15:37.465 "data_offset": 2048, 00:15:37.465 "data_size": 63488 
00:15:37.465 }, 00:15:37.465 { 00:15:37.465 "name": "BaseBdev2", 00:15:37.465 "uuid": "3ecdb7b1-92d3-5eab-9ce7-ad37f5c699ae", 00:15:37.465 "is_configured": true, 00:15:37.465 "data_offset": 2048, 00:15:37.465 "data_size": 63488 00:15:37.465 }, 00:15:37.465 { 00:15:37.465 "name": "BaseBdev3", 00:15:37.465 "uuid": "cf38e2b5-b0ab-5752-b8bc-91a1329e70a0", 00:15:37.465 "is_configured": true, 00:15:37.465 "data_offset": 2048, 00:15:37.465 "data_size": 63488 00:15:37.465 } 00:15:37.465 ] 00:15:37.465 }' 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.465 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.723 [2024-12-09 22:56:53.468707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.723 [2024-12-09 22:56:53.468818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.723 [2024-12-09 22:56:53.471978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.723 [2024-12-09 22:56:53.472087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.723 [2024-12-09 22:56:53.472171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.723 [2024-12-09 22:56:53.472230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:37.723 { 00:15:37.723 "results": [ 00:15:37.723 { 00:15:37.723 "job": "raid_bdev1", 00:15:37.723 "core_mask": "0x1", 00:15:37.723 "workload": "randrw", 00:15:37.723 "percentage": 50, 
00:15:37.723 "status": "finished", 00:15:37.723 "queue_depth": 1, 00:15:37.723 "io_size": 131072, 00:15:37.723 "runtime": 1.34621, 00:15:37.723 "iops": 12220.975925004272, 00:15:37.723 "mibps": 1527.621990625534, 00:15:37.723 "io_failed": 1, 00:15:37.723 "io_timeout": 0, 00:15:37.723 "avg_latency_us": 114.76165093264206, 00:15:37.723 "min_latency_us": 25.4882096069869, 00:15:37.723 "max_latency_us": 1509.6174672489083 00:15:37.723 } 00:15:37.723 ], 00:15:37.723 "core_count": 1 00:15:37.723 } 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65808 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65808 ']' 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65808 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65808 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65808' 00:15:37.723 killing process with pid 65808 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65808 00:15:37.723 [2024-12-09 22:56:53.521202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.723 22:56:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65808 00:15:37.983 [2024-12-09 
22:56:53.799127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fKcl79Bk6B 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:39.449 ************************************ 00:15:39.449 END TEST raid_read_error_test 00:15:39.449 ************************************ 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:15:39.449 00:15:39.449 real 0m4.915s 00:15:39.449 user 0m5.690s 00:15:39.449 sys 0m0.696s 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.449 22:56:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.449 22:56:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:15:39.449 22:56:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:39.449 22:56:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.449 22:56:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.707 ************************************ 00:15:39.707 START TEST raid_write_error_test 00:15:39.707 ************************************ 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:15:39.707 22:56:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:39.707 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:39.707 22:56:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pPvVGBrmQW 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65953 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65953 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65953 ']' 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.708 22:56:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.708 [2024-12-09 22:56:55.422956] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:15:39.708 [2024-12-09 22:56:55.423084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65953 ] 00:15:39.965 [2024-12-09 22:56:55.600409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.965 [2024-12-09 22:56:55.758345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.223 [2024-12-09 22:56:56.023681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.223 [2024-12-09 22:56:56.023749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.482 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.482 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:40.482 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.482 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:40.482 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.482 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.741 BaseBdev1_malloc 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.741 true 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.741 [2024-12-09 22:56:56.388617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:40.741 [2024-12-09 22:56:56.388724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.741 [2024-12-09 22:56:56.388757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:40.741 [2024-12-09 22:56:56.388772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.741 [2024-12-09 22:56:56.391577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.741 [2024-12-09 22:56:56.391704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.741 BaseBdev1 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.741 BaseBdev2_malloc 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.741 true 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.741 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.742 [2024-12-09 22:56:56.464197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:40.742 [2024-12-09 22:56:56.464268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.742 [2024-12-09 22:56:56.464289] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:40.742 [2024-12-09 22:56:56.464301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.742 [2024-12-09 22:56:56.467109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.742 [2024-12-09 22:56:56.467195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.742 BaseBdev2 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:40.742 22:56:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.742 BaseBdev3_malloc 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.742 true 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.742 [2024-12-09 22:56:56.553991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:40.742 [2024-12-09 22:56:56.554108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.742 [2024-12-09 22:56:56.554137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:40.742 [2024-12-09 22:56:56.554150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.742 [2024-12-09 22:56:56.557113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.742 [2024-12-09 22:56:56.557217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:40.742 BaseBdev3 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.742 [2024-12-09 22:56:56.566206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.742 [2024-12-09 22:56:56.568706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.742 [2024-12-09 22:56:56.568797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.742 [2024-12-09 22:56:56.569059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:40.742 [2024-12-09 22:56:56.569077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:40.742 [2024-12-09 22:56:56.569411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:40.742 [2024-12-09 22:56:56.569652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:40.742 [2024-12-09 22:56:56.569671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:40.742 [2024-12-09 22:56:56.569877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.742 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.000 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.000 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.000 "name": "raid_bdev1", 00:15:41.000 "uuid": "0ff31f87-b465-40d1-a307-1a87beaaa42a", 00:15:41.000 "strip_size_kb": 64, 00:15:41.000 "state": "online", 00:15:41.000 "raid_level": "raid0", 00:15:41.000 "superblock": true, 00:15:41.000 "num_base_bdevs": 3, 00:15:41.000 "num_base_bdevs_discovered": 3, 00:15:41.000 "num_base_bdevs_operational": 3, 00:15:41.000 "base_bdevs_list": [ 00:15:41.000 { 00:15:41.000 "name": "BaseBdev1", 
00:15:41.000 "uuid": "fb2c1fb1-07ba-5944-bf1e-8805c7772320", 00:15:41.000 "is_configured": true, 00:15:41.000 "data_offset": 2048, 00:15:41.000 "data_size": 63488 00:15:41.000 }, 00:15:41.000 { 00:15:41.000 "name": "BaseBdev2", 00:15:41.000 "uuid": "0ca35fe0-8600-5841-8e35-0beffc7eb015", 00:15:41.000 "is_configured": true, 00:15:41.000 "data_offset": 2048, 00:15:41.000 "data_size": 63488 00:15:41.000 }, 00:15:41.000 { 00:15:41.000 "name": "BaseBdev3", 00:15:41.000 "uuid": "005f2c05-839c-5170-b235-04a16d9f4506", 00:15:41.000 "is_configured": true, 00:15:41.000 "data_offset": 2048, 00:15:41.000 "data_size": 63488 00:15:41.000 } 00:15:41.000 ] 00:15:41.000 }' 00:15:41.000 22:56:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.000 22:56:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.258 22:56:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:41.258 22:56:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:41.516 [2024-12-09 22:56:57.123017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.453 "name": "raid_bdev1", 00:15:42.453 "uuid": "0ff31f87-b465-40d1-a307-1a87beaaa42a", 00:15:42.453 "strip_size_kb": 64, 00:15:42.453 "state": "online", 00:15:42.453 
"raid_level": "raid0", 00:15:42.453 "superblock": true, 00:15:42.453 "num_base_bdevs": 3, 00:15:42.453 "num_base_bdevs_discovered": 3, 00:15:42.453 "num_base_bdevs_operational": 3, 00:15:42.453 "base_bdevs_list": [ 00:15:42.453 { 00:15:42.453 "name": "BaseBdev1", 00:15:42.453 "uuid": "fb2c1fb1-07ba-5944-bf1e-8805c7772320", 00:15:42.453 "is_configured": true, 00:15:42.453 "data_offset": 2048, 00:15:42.453 "data_size": 63488 00:15:42.453 }, 00:15:42.453 { 00:15:42.453 "name": "BaseBdev2", 00:15:42.453 "uuid": "0ca35fe0-8600-5841-8e35-0beffc7eb015", 00:15:42.453 "is_configured": true, 00:15:42.453 "data_offset": 2048, 00:15:42.453 "data_size": 63488 00:15:42.453 }, 00:15:42.453 { 00:15:42.453 "name": "BaseBdev3", 00:15:42.453 "uuid": "005f2c05-839c-5170-b235-04a16d9f4506", 00:15:42.453 "is_configured": true, 00:15:42.453 "data_offset": 2048, 00:15:42.453 "data_size": 63488 00:15:42.453 } 00:15:42.453 ] 00:15:42.453 }' 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.453 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.713 [2024-12-09 22:56:58.529472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.713 [2024-12-09 22:56:58.529510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.713 [2024-12-09 22:56:58.532841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.713 [2024-12-09 22:56:58.532955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.713 [2024-12-09 22:56:58.533013] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.713 [2024-12-09 22:56:58.533026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:42.713 { 00:15:42.713 "results": [ 00:15:42.713 { 00:15:42.713 "job": "raid_bdev1", 00:15:42.713 "core_mask": "0x1", 00:15:42.713 "workload": "randrw", 00:15:42.713 "percentage": 50, 00:15:42.713 "status": "finished", 00:15:42.713 "queue_depth": 1, 00:15:42.713 "io_size": 131072, 00:15:42.713 "runtime": 1.40668, 00:15:42.713 "iops": 11767.424005459663, 00:15:42.713 "mibps": 1470.9280006824579, 00:15:42.713 "io_failed": 1, 00:15:42.713 "io_timeout": 0, 00:15:42.713 "avg_latency_us": 119.09217830437689, 00:15:42.713 "min_latency_us": 23.699563318777294, 00:15:42.713 "max_latency_us": 1810.1100436681222 00:15:42.713 } 00:15:42.713 ], 00:15:42.713 "core_count": 1 00:15:42.713 } 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65953 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65953 ']' 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65953 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.713 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65953 00:15:42.972 killing process with pid 65953 00:15:42.972 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.972 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.972 
22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65953' 00:15:42.972 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65953 00:15:42.972 [2024-12-09 22:56:58.579316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.973 22:56:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65953 00:15:43.232 [2024-12-09 22:56:58.863754] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pPvVGBrmQW 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:44.625 ************************************ 00:15:44.625 END TEST raid_write_error_test 00:15:44.625 ************************************ 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:44.625 00:15:44.625 real 0m5.017s 00:15:44.625 user 0m5.833s 00:15:44.625 sys 0m0.710s 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.625 22:57:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.625 22:57:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:44.625 22:57:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:15:44.625 22:57:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:44.625 22:57:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.625 22:57:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.625 ************************************ 00:15:44.625 START TEST raid_state_function_test 00:15:44.625 ************************************ 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:44.625 22:57:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66098 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66098' 00:15:44.625 Process raid pid: 66098 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66098 00:15:44.625 22:57:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66098 ']' 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.625 22:57:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.885 [2024-12-09 22:57:00.516659] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:15:44.885 [2024-12-09 22:57:00.516806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.885 [2024-12-09 22:57:00.710303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.149 [2024-12-09 22:57:00.860040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.413 [2024-12-09 22:57:01.126969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.413 [2024-12-09 22:57:01.127033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.673 [2024-12-09 22:57:01.400068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.673 [2024-12-09 22:57:01.400194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.673 [2024-12-09 22:57:01.400212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.673 [2024-12-09 22:57:01.400226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.673 [2024-12-09 22:57:01.400233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.673 [2024-12-09 22:57:01.400244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.673 "name": "Existed_Raid", 00:15:45.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.673 "strip_size_kb": 64, 00:15:45.673 "state": "configuring", 00:15:45.673 "raid_level": "concat", 00:15:45.673 "superblock": false, 00:15:45.673 "num_base_bdevs": 3, 00:15:45.673 "num_base_bdevs_discovered": 0, 00:15:45.673 "num_base_bdevs_operational": 3, 00:15:45.673 "base_bdevs_list": [ 00:15:45.673 { 00:15:45.673 "name": "BaseBdev1", 00:15:45.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.673 "is_configured": false, 00:15:45.673 "data_offset": 0, 00:15:45.673 "data_size": 0 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "name": "BaseBdev2", 00:15:45.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.673 "is_configured": false, 00:15:45.673 "data_offset": 0, 00:15:45.673 "data_size": 0 00:15:45.673 }, 00:15:45.673 { 00:15:45.673 "name": "BaseBdev3", 00:15:45.673 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:45.673 "is_configured": false, 00:15:45.673 "data_offset": 0, 00:15:45.673 "data_size": 0 00:15:45.673 } 00:15:45.673 ] 00:15:45.673 }' 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.673 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.243 [2024-12-09 22:57:01.855240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.243 [2024-12-09 22:57:01.855352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.243 [2024-12-09 22:57:01.867216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.243 [2024-12-09 22:57:01.867315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.243 [2024-12-09 22:57:01.867347] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.243 [2024-12-09 22:57:01.867375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:15:46.243 [2024-12-09 22:57:01.867396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.243 [2024-12-09 22:57:01.867421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.243 [2024-12-09 22:57:01.928274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.243 BaseBdev1 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.243 [ 00:15:46.243 { 00:15:46.243 "name": "BaseBdev1", 00:15:46.243 "aliases": [ 00:15:46.243 "02b03a3b-4c7b-41e3-916b-91d3e48801dc" 00:15:46.243 ], 00:15:46.243 "product_name": "Malloc disk", 00:15:46.243 "block_size": 512, 00:15:46.243 "num_blocks": 65536, 00:15:46.243 "uuid": "02b03a3b-4c7b-41e3-916b-91d3e48801dc", 00:15:46.243 "assigned_rate_limits": { 00:15:46.243 "rw_ios_per_sec": 0, 00:15:46.243 "rw_mbytes_per_sec": 0, 00:15:46.243 "r_mbytes_per_sec": 0, 00:15:46.243 "w_mbytes_per_sec": 0 00:15:46.243 }, 00:15:46.243 "claimed": true, 00:15:46.243 "claim_type": "exclusive_write", 00:15:46.243 "zoned": false, 00:15:46.243 "supported_io_types": { 00:15:46.243 "read": true, 00:15:46.243 "write": true, 00:15:46.243 "unmap": true, 00:15:46.243 "flush": true, 00:15:46.243 "reset": true, 00:15:46.243 "nvme_admin": false, 00:15:46.243 "nvme_io": false, 00:15:46.243 "nvme_io_md": false, 00:15:46.243 "write_zeroes": true, 00:15:46.243 "zcopy": true, 00:15:46.243 "get_zone_info": false, 00:15:46.243 "zone_management": false, 00:15:46.243 "zone_append": false, 00:15:46.243 "compare": false, 00:15:46.243 "compare_and_write": false, 00:15:46.243 "abort": true, 00:15:46.243 "seek_hole": false, 00:15:46.243 "seek_data": false, 00:15:46.243 "copy": true, 00:15:46.243 "nvme_iov_md": false 00:15:46.243 }, 00:15:46.243 "memory_domains": [ 00:15:46.243 { 00:15:46.243 "dma_device_id": "system", 00:15:46.243 "dma_device_type": 1 00:15:46.243 }, 00:15:46.243 { 00:15:46.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:46.243 "dma_device_type": 2 00:15:46.243 } 00:15:46.243 ], 00:15:46.243 "driver_specific": {} 00:15:46.243 } 00:15:46.243 ] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.243 22:57:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.243 22:57:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.243 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.243 "name": "Existed_Raid", 00:15:46.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.243 "strip_size_kb": 64, 00:15:46.243 "state": "configuring", 00:15:46.243 "raid_level": "concat", 00:15:46.243 "superblock": false, 00:15:46.243 "num_base_bdevs": 3, 00:15:46.244 "num_base_bdevs_discovered": 1, 00:15:46.244 "num_base_bdevs_operational": 3, 00:15:46.244 "base_bdevs_list": [ 00:15:46.244 { 00:15:46.244 "name": "BaseBdev1", 00:15:46.244 "uuid": "02b03a3b-4c7b-41e3-916b-91d3e48801dc", 00:15:46.244 "is_configured": true, 00:15:46.244 "data_offset": 0, 00:15:46.244 "data_size": 65536 00:15:46.244 }, 00:15:46.244 { 00:15:46.244 "name": "BaseBdev2", 00:15:46.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.244 "is_configured": false, 00:15:46.244 "data_offset": 0, 00:15:46.244 "data_size": 0 00:15:46.244 }, 00:15:46.244 { 00:15:46.244 "name": "BaseBdev3", 00:15:46.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.244 "is_configured": false, 00:15:46.244 "data_offset": 0, 00:15:46.244 "data_size": 0 00:15:46.244 } 00:15:46.244 ] 00:15:46.244 }' 00:15:46.244 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.244 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.814 [2024-12-09 22:57:02.419556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.814 [2024-12-09 22:57:02.419689] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.814 [2024-12-09 22:57:02.431612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.814 [2024-12-09 22:57:02.434153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.814 [2024-12-09 22:57:02.434265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.814 [2024-12-09 22:57:02.434283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.814 [2024-12-09 22:57:02.434294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.814 22:57:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.814 "name": "Existed_Raid", 00:15:46.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.814 "strip_size_kb": 64, 00:15:46.814 "state": "configuring", 00:15:46.814 "raid_level": "concat", 00:15:46.814 "superblock": false, 00:15:46.814 "num_base_bdevs": 3, 00:15:46.814 "num_base_bdevs_discovered": 1, 00:15:46.814 "num_base_bdevs_operational": 3, 00:15:46.814 "base_bdevs_list": [ 00:15:46.814 { 00:15:46.814 "name": "BaseBdev1", 00:15:46.814 "uuid": "02b03a3b-4c7b-41e3-916b-91d3e48801dc", 00:15:46.814 "is_configured": true, 00:15:46.814 "data_offset": 
0, 00:15:46.814 "data_size": 65536 00:15:46.814 }, 00:15:46.814 { 00:15:46.814 "name": "BaseBdev2", 00:15:46.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.814 "is_configured": false, 00:15:46.814 "data_offset": 0, 00:15:46.814 "data_size": 0 00:15:46.814 }, 00:15:46.814 { 00:15:46.814 "name": "BaseBdev3", 00:15:46.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.814 "is_configured": false, 00:15:46.814 "data_offset": 0, 00:15:46.814 "data_size": 0 00:15:46.814 } 00:15:46.814 ] 00:15:46.814 }' 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.814 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.073 [2024-12-09 22:57:02.924891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.073 BaseBdev2 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.073 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.332 [ 00:15:47.332 { 00:15:47.332 "name": "BaseBdev2", 00:15:47.332 "aliases": [ 00:15:47.332 "942189f7-cff9-4cb9-856f-acf8dfd9e266" 00:15:47.332 ], 00:15:47.332 "product_name": "Malloc disk", 00:15:47.332 "block_size": 512, 00:15:47.332 "num_blocks": 65536, 00:15:47.332 "uuid": "942189f7-cff9-4cb9-856f-acf8dfd9e266", 00:15:47.332 "assigned_rate_limits": { 00:15:47.332 "rw_ios_per_sec": 0, 00:15:47.332 "rw_mbytes_per_sec": 0, 00:15:47.332 "r_mbytes_per_sec": 0, 00:15:47.332 "w_mbytes_per_sec": 0 00:15:47.332 }, 00:15:47.332 "claimed": true, 00:15:47.332 "claim_type": "exclusive_write", 00:15:47.332 "zoned": false, 00:15:47.332 "supported_io_types": { 00:15:47.332 "read": true, 00:15:47.332 "write": true, 00:15:47.332 "unmap": true, 00:15:47.332 "flush": true, 00:15:47.332 "reset": true, 00:15:47.332 "nvme_admin": false, 00:15:47.332 "nvme_io": false, 00:15:47.332 "nvme_io_md": false, 00:15:47.332 "write_zeroes": true, 00:15:47.332 "zcopy": true, 00:15:47.332 "get_zone_info": false, 00:15:47.332 "zone_management": false, 00:15:47.332 "zone_append": false, 00:15:47.332 "compare": false, 00:15:47.332 "compare_and_write": false, 00:15:47.332 "abort": true, 00:15:47.332 "seek_hole": 
false, 00:15:47.332 "seek_data": false, 00:15:47.332 "copy": true, 00:15:47.332 "nvme_iov_md": false 00:15:47.332 }, 00:15:47.332 "memory_domains": [ 00:15:47.332 { 00:15:47.332 "dma_device_id": "system", 00:15:47.332 "dma_device_type": 1 00:15:47.332 }, 00:15:47.332 { 00:15:47.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.332 "dma_device_type": 2 00:15:47.332 } 00:15:47.332 ], 00:15:47.332 "driver_specific": {} 00:15:47.332 } 00:15:47.332 ] 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.332 22:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.333 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.333 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.333 22:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.333 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.333 "name": "Existed_Raid", 00:15:47.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.333 "strip_size_kb": 64, 00:15:47.333 "state": "configuring", 00:15:47.333 "raid_level": "concat", 00:15:47.333 "superblock": false, 00:15:47.333 "num_base_bdevs": 3, 00:15:47.333 "num_base_bdevs_discovered": 2, 00:15:47.333 "num_base_bdevs_operational": 3, 00:15:47.333 "base_bdevs_list": [ 00:15:47.333 { 00:15:47.333 "name": "BaseBdev1", 00:15:47.333 "uuid": "02b03a3b-4c7b-41e3-916b-91d3e48801dc", 00:15:47.333 "is_configured": true, 00:15:47.333 "data_offset": 0, 00:15:47.333 "data_size": 65536 00:15:47.333 }, 00:15:47.333 { 00:15:47.333 "name": "BaseBdev2", 00:15:47.333 "uuid": "942189f7-cff9-4cb9-856f-acf8dfd9e266", 00:15:47.333 "is_configured": true, 00:15:47.333 "data_offset": 0, 00:15:47.333 "data_size": 65536 00:15:47.333 }, 00:15:47.333 { 00:15:47.333 "name": "BaseBdev3", 00:15:47.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.333 "is_configured": false, 00:15:47.333 "data_offset": 0, 00:15:47.333 "data_size": 0 00:15:47.333 } 00:15:47.333 ] 00:15:47.333 }' 00:15:47.333 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.333 22:57:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.901 [2024-12-09 22:57:03.531132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.901 [2024-12-09 22:57:03.531304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.901 [2024-12-09 22:57:03.531324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:47.901 [2024-12-09 22:57:03.531659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:47.901 [2024-12-09 22:57:03.531856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.901 [2024-12-09 22:57:03.531868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.901 [2024-12-09 22:57:03.532190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.901 BaseBdev3 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.901 22:57:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.901 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.901 [ 00:15:47.901 { 00:15:47.901 "name": "BaseBdev3", 00:15:47.901 "aliases": [ 00:15:47.901 "9ad6dc5a-db98-427e-9aca-e6f0b5a521cc" 00:15:47.901 ], 00:15:47.901 "product_name": "Malloc disk", 00:15:47.901 "block_size": 512, 00:15:47.901 "num_blocks": 65536, 00:15:47.901 "uuid": "9ad6dc5a-db98-427e-9aca-e6f0b5a521cc", 00:15:47.901 "assigned_rate_limits": { 00:15:47.901 "rw_ios_per_sec": 0, 00:15:47.901 "rw_mbytes_per_sec": 0, 00:15:47.901 "r_mbytes_per_sec": 0, 00:15:47.901 "w_mbytes_per_sec": 0 00:15:47.901 }, 00:15:47.901 "claimed": true, 00:15:47.901 "claim_type": "exclusive_write", 00:15:47.901 "zoned": false, 00:15:47.901 "supported_io_types": { 00:15:47.901 "read": true, 00:15:47.901 "write": true, 00:15:47.901 "unmap": true, 00:15:47.901 "flush": true, 00:15:47.901 "reset": true, 00:15:47.901 "nvme_admin": false, 00:15:47.901 "nvme_io": false, 00:15:47.901 "nvme_io_md": false, 00:15:47.901 "write_zeroes": true, 00:15:47.901 "zcopy": true, 00:15:47.901 "get_zone_info": false, 00:15:47.901 "zone_management": false, 00:15:47.901 "zone_append": false, 00:15:47.901 "compare": false, 
00:15:47.901 "compare_and_write": false, 00:15:47.901 "abort": true, 00:15:47.901 "seek_hole": false, 00:15:47.901 "seek_data": false, 00:15:47.901 "copy": true, 00:15:47.901 "nvme_iov_md": false 00:15:47.901 }, 00:15:47.901 "memory_domains": [ 00:15:47.901 { 00:15:47.901 "dma_device_id": "system", 00:15:47.901 "dma_device_type": 1 00:15:47.901 }, 00:15:47.901 { 00:15:47.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.902 "dma_device_type": 2 00:15:47.902 } 00:15:47.902 ], 00:15:47.902 "driver_specific": {} 00:15:47.902 } 00:15:47.902 ] 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.902 "name": "Existed_Raid", 00:15:47.902 "uuid": "2f5a2531-5c68-4f7e-83b3-32e203b5b1cc", 00:15:47.902 "strip_size_kb": 64, 00:15:47.902 "state": "online", 00:15:47.902 "raid_level": "concat", 00:15:47.902 "superblock": false, 00:15:47.902 "num_base_bdevs": 3, 00:15:47.902 "num_base_bdevs_discovered": 3, 00:15:47.902 "num_base_bdevs_operational": 3, 00:15:47.902 "base_bdevs_list": [ 00:15:47.902 { 00:15:47.902 "name": "BaseBdev1", 00:15:47.902 "uuid": "02b03a3b-4c7b-41e3-916b-91d3e48801dc", 00:15:47.902 "is_configured": true, 00:15:47.902 "data_offset": 0, 00:15:47.902 "data_size": 65536 00:15:47.902 }, 00:15:47.902 { 00:15:47.902 "name": "BaseBdev2", 00:15:47.902 "uuid": "942189f7-cff9-4cb9-856f-acf8dfd9e266", 00:15:47.902 "is_configured": true, 00:15:47.902 "data_offset": 0, 00:15:47.902 "data_size": 65536 00:15:47.902 }, 00:15:47.902 { 00:15:47.902 "name": "BaseBdev3", 00:15:47.902 "uuid": "9ad6dc5a-db98-427e-9aca-e6f0b5a521cc", 00:15:47.902 "is_configured": true, 00:15:47.902 "data_offset": 0, 00:15:47.902 "data_size": 65536 00:15:47.902 } 00:15:47.902 ] 00:15:47.902 }' 00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:47.902 22:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.471 [2024-12-09 22:57:04.070712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.471 "name": "Existed_Raid", 00:15:48.471 "aliases": [ 00:15:48.471 "2f5a2531-5c68-4f7e-83b3-32e203b5b1cc" 00:15:48.471 ], 00:15:48.471 "product_name": "Raid Volume", 00:15:48.471 "block_size": 512, 00:15:48.471 "num_blocks": 196608, 00:15:48.471 "uuid": "2f5a2531-5c68-4f7e-83b3-32e203b5b1cc", 00:15:48.471 "assigned_rate_limits": { 00:15:48.471 "rw_ios_per_sec": 0, 00:15:48.471 "rw_mbytes_per_sec": 0, 00:15:48.471 "r_mbytes_per_sec": 
0, 00:15:48.471 "w_mbytes_per_sec": 0 00:15:48.471 }, 00:15:48.471 "claimed": false, 00:15:48.471 "zoned": false, 00:15:48.471 "supported_io_types": { 00:15:48.471 "read": true, 00:15:48.471 "write": true, 00:15:48.471 "unmap": true, 00:15:48.471 "flush": true, 00:15:48.471 "reset": true, 00:15:48.471 "nvme_admin": false, 00:15:48.471 "nvme_io": false, 00:15:48.471 "nvme_io_md": false, 00:15:48.471 "write_zeroes": true, 00:15:48.471 "zcopy": false, 00:15:48.471 "get_zone_info": false, 00:15:48.471 "zone_management": false, 00:15:48.471 "zone_append": false, 00:15:48.471 "compare": false, 00:15:48.471 "compare_and_write": false, 00:15:48.471 "abort": false, 00:15:48.471 "seek_hole": false, 00:15:48.471 "seek_data": false, 00:15:48.471 "copy": false, 00:15:48.471 "nvme_iov_md": false 00:15:48.471 }, 00:15:48.471 "memory_domains": [ 00:15:48.471 { 00:15:48.471 "dma_device_id": "system", 00:15:48.471 "dma_device_type": 1 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.471 "dma_device_type": 2 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "dma_device_id": "system", 00:15:48.471 "dma_device_type": 1 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.471 "dma_device_type": 2 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "dma_device_id": "system", 00:15:48.471 "dma_device_type": 1 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.471 "dma_device_type": 2 00:15:48.471 } 00:15:48.471 ], 00:15:48.471 "driver_specific": { 00:15:48.471 "raid": { 00:15:48.471 "uuid": "2f5a2531-5c68-4f7e-83b3-32e203b5b1cc", 00:15:48.471 "strip_size_kb": 64, 00:15:48.471 "state": "online", 00:15:48.471 "raid_level": "concat", 00:15:48.471 "superblock": false, 00:15:48.471 "num_base_bdevs": 3, 00:15:48.471 "num_base_bdevs_discovered": 3, 00:15:48.471 "num_base_bdevs_operational": 3, 00:15:48.471 "base_bdevs_list": [ 00:15:48.471 { 00:15:48.471 "name": "BaseBdev1", 
00:15:48.471 "uuid": "02b03a3b-4c7b-41e3-916b-91d3e48801dc", 00:15:48.471 "is_configured": true, 00:15:48.471 "data_offset": 0, 00:15:48.471 "data_size": 65536 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "name": "BaseBdev2", 00:15:48.471 "uuid": "942189f7-cff9-4cb9-856f-acf8dfd9e266", 00:15:48.471 "is_configured": true, 00:15:48.471 "data_offset": 0, 00:15:48.471 "data_size": 65536 00:15:48.471 }, 00:15:48.471 { 00:15:48.471 "name": "BaseBdev3", 00:15:48.471 "uuid": "9ad6dc5a-db98-427e-9aca-e6f0b5a521cc", 00:15:48.471 "is_configured": true, 00:15:48.471 "data_offset": 0, 00:15:48.471 "data_size": 65536 00:15:48.471 } 00:15:48.471 ] 00:15:48.471 } 00:15:48.471 } 00:15:48.471 }' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:48.471 BaseBdev2 00:15:48.471 BaseBdev3' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.471 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.730 [2024-12-09 22:57:04.373895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.730 [2024-12-09 22:57:04.373976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.730 [2024-12-09 22:57:04.374080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.730 "name": "Existed_Raid", 00:15:48.730 "uuid": "2f5a2531-5c68-4f7e-83b3-32e203b5b1cc", 00:15:48.730 "strip_size_kb": 64, 00:15:48.730 "state": "offline", 00:15:48.730 "raid_level": "concat", 00:15:48.730 "superblock": false, 00:15:48.730 "num_base_bdevs": 3, 00:15:48.730 "num_base_bdevs_discovered": 2, 00:15:48.730 "num_base_bdevs_operational": 2, 00:15:48.730 "base_bdevs_list": [ 00:15:48.730 { 00:15:48.730 "name": null, 00:15:48.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.730 "is_configured": false, 00:15:48.730 "data_offset": 0, 00:15:48.730 "data_size": 65536 00:15:48.730 }, 00:15:48.730 { 00:15:48.730 "name": "BaseBdev2", 00:15:48.730 "uuid": 
"942189f7-cff9-4cb9-856f-acf8dfd9e266", 00:15:48.730 "is_configured": true, 00:15:48.730 "data_offset": 0, 00:15:48.730 "data_size": 65536 00:15:48.730 }, 00:15:48.730 { 00:15:48.730 "name": "BaseBdev3", 00:15:48.730 "uuid": "9ad6dc5a-db98-427e-9aca-e6f0b5a521cc", 00:15:48.730 "is_configured": true, 00:15:48.730 "data_offset": 0, 00:15:48.730 "data_size": 65536 00:15:48.730 } 00:15:48.730 ] 00:15:48.730 }' 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.730 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.297 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:49.297 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.297 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.297 22:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.297 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.297 22:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.297 [2024-12-09 22:57:05.024086] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.297 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 [2024-12-09 22:57:05.208966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.557 [2024-12-09 22:57:05.209103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.557 22:57:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 BaseBdev2 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.816 
22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 [ 00:15:49.816 { 00:15:49.816 "name": "BaseBdev2", 00:15:49.816 "aliases": [ 00:15:49.816 "84e162c2-3c81-4a02-aeaf-777b182d0eff" 00:15:49.816 ], 00:15:49.816 "product_name": "Malloc disk", 00:15:49.816 "block_size": 512, 00:15:49.816 "num_blocks": 65536, 00:15:49.816 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:49.816 "assigned_rate_limits": { 00:15:49.816 "rw_ios_per_sec": 0, 00:15:49.816 "rw_mbytes_per_sec": 0, 00:15:49.816 "r_mbytes_per_sec": 0, 00:15:49.816 "w_mbytes_per_sec": 0 00:15:49.816 }, 00:15:49.816 "claimed": false, 00:15:49.816 "zoned": false, 00:15:49.816 "supported_io_types": { 00:15:49.816 "read": true, 00:15:49.816 "write": true, 00:15:49.816 "unmap": true, 00:15:49.816 "flush": true, 00:15:49.816 "reset": true, 00:15:49.816 "nvme_admin": false, 00:15:49.816 "nvme_io": false, 00:15:49.816 "nvme_io_md": false, 00:15:49.816 "write_zeroes": true, 
00:15:49.816 "zcopy": true, 00:15:49.816 "get_zone_info": false, 00:15:49.816 "zone_management": false, 00:15:49.816 "zone_append": false, 00:15:49.816 "compare": false, 00:15:49.816 "compare_and_write": false, 00:15:49.816 "abort": true, 00:15:49.816 "seek_hole": false, 00:15:49.816 "seek_data": false, 00:15:49.816 "copy": true, 00:15:49.816 "nvme_iov_md": false 00:15:49.816 }, 00:15:49.816 "memory_domains": [ 00:15:49.816 { 00:15:49.816 "dma_device_id": "system", 00:15:49.816 "dma_device_type": 1 00:15:49.816 }, 00:15:49.816 { 00:15:49.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.816 "dma_device_type": 2 00:15:49.816 } 00:15:49.816 ], 00:15:49.816 "driver_specific": {} 00:15:49.816 } 00:15:49.816 ] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 BaseBdev3 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.816 22:57:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 [ 00:15:49.816 { 00:15:49.816 "name": "BaseBdev3", 00:15:49.816 "aliases": [ 00:15:49.816 "5238e5b7-8871-440d-948a-707889745162" 00:15:49.816 ], 00:15:49.816 "product_name": "Malloc disk", 00:15:49.816 "block_size": 512, 00:15:49.816 "num_blocks": 65536, 00:15:49.816 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:49.816 "assigned_rate_limits": { 00:15:49.816 "rw_ios_per_sec": 0, 00:15:49.816 "rw_mbytes_per_sec": 0, 00:15:49.816 "r_mbytes_per_sec": 0, 00:15:49.816 "w_mbytes_per_sec": 0 00:15:49.816 }, 00:15:49.816 "claimed": false, 00:15:49.816 "zoned": false, 00:15:49.816 "supported_io_types": { 00:15:49.816 "read": true, 00:15:49.816 "write": true, 00:15:49.816 "unmap": true, 00:15:49.816 "flush": true, 00:15:49.816 "reset": true, 00:15:49.816 "nvme_admin": false, 00:15:49.816 "nvme_io": false, 00:15:49.816 "nvme_io_md": false, 00:15:49.816 "write_zeroes": true, 
00:15:49.816 "zcopy": true, 00:15:49.816 "get_zone_info": false, 00:15:49.816 "zone_management": false, 00:15:49.816 "zone_append": false, 00:15:49.816 "compare": false, 00:15:49.816 "compare_and_write": false, 00:15:49.816 "abort": true, 00:15:49.816 "seek_hole": false, 00:15:49.816 "seek_data": false, 00:15:49.816 "copy": true, 00:15:49.816 "nvme_iov_md": false 00:15:49.816 }, 00:15:49.816 "memory_domains": [ 00:15:49.816 { 00:15:49.816 "dma_device_id": "system", 00:15:49.816 "dma_device_type": 1 00:15:49.816 }, 00:15:49.816 { 00:15:49.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.816 "dma_device_type": 2 00:15:49.816 } 00:15:49.816 ], 00:15:49.816 "driver_specific": {} 00:15:49.816 } 00:15:49.816 ] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 [2024-12-09 22:57:05.577900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.816 [2024-12-09 22:57:05.578008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.816 [2024-12-09 22:57:05.578061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.816 [2024-12-09 22:57:05.580472] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.816 22:57:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.816 "name": "Existed_Raid", 00:15:49.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.816 "strip_size_kb": 64, 00:15:49.816 "state": "configuring", 00:15:49.816 "raid_level": "concat", 00:15:49.816 "superblock": false, 00:15:49.816 "num_base_bdevs": 3, 00:15:49.816 "num_base_bdevs_discovered": 2, 00:15:49.817 "num_base_bdevs_operational": 3, 00:15:49.817 "base_bdevs_list": [ 00:15:49.817 { 00:15:49.817 "name": "BaseBdev1", 00:15:49.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.817 "is_configured": false, 00:15:49.817 "data_offset": 0, 00:15:49.817 "data_size": 0 00:15:49.817 }, 00:15:49.817 { 00:15:49.817 "name": "BaseBdev2", 00:15:49.817 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:49.817 "is_configured": true, 00:15:49.817 "data_offset": 0, 00:15:49.817 "data_size": 65536 00:15:49.817 }, 00:15:49.817 { 00:15:49.817 "name": "BaseBdev3", 00:15:49.817 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:49.817 "is_configured": true, 00:15:49.817 "data_offset": 0, 00:15:49.817 "data_size": 65536 00:15:49.817 } 00:15:49.817 ] 00:15:49.817 }' 00:15:49.817 22:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.817 22:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.386 [2024-12-09 22:57:06.057205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.386 "name": "Existed_Raid", 00:15:50.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.386 "strip_size_kb": 64, 00:15:50.386 "state": "configuring", 00:15:50.386 "raid_level": "concat", 00:15:50.386 "superblock": false, 
00:15:50.386 "num_base_bdevs": 3, 00:15:50.386 "num_base_bdevs_discovered": 1, 00:15:50.386 "num_base_bdevs_operational": 3, 00:15:50.386 "base_bdevs_list": [ 00:15:50.386 { 00:15:50.386 "name": "BaseBdev1", 00:15:50.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.386 "is_configured": false, 00:15:50.386 "data_offset": 0, 00:15:50.386 "data_size": 0 00:15:50.386 }, 00:15:50.386 { 00:15:50.386 "name": null, 00:15:50.386 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:50.386 "is_configured": false, 00:15:50.386 "data_offset": 0, 00:15:50.386 "data_size": 65536 00:15:50.386 }, 00:15:50.386 { 00:15:50.386 "name": "BaseBdev3", 00:15:50.386 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:50.386 "is_configured": true, 00:15:50.386 "data_offset": 0, 00:15:50.386 "data_size": 65536 00:15:50.386 } 00:15:50.386 ] 00:15:50.386 }' 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.386 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.955 
22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.955 [2024-12-09 22:57:06.599029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.955 BaseBdev1 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.955 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.955 [ 00:15:50.955 { 00:15:50.955 "name": "BaseBdev1", 00:15:50.955 "aliases": [ 00:15:50.955 "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126" 00:15:50.955 ], 00:15:50.955 "product_name": 
"Malloc disk", 00:15:50.955 "block_size": 512, 00:15:50.955 "num_blocks": 65536, 00:15:50.955 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:50.955 "assigned_rate_limits": { 00:15:50.955 "rw_ios_per_sec": 0, 00:15:50.955 "rw_mbytes_per_sec": 0, 00:15:50.955 "r_mbytes_per_sec": 0, 00:15:50.955 "w_mbytes_per_sec": 0 00:15:50.955 }, 00:15:50.955 "claimed": true, 00:15:50.955 "claim_type": "exclusive_write", 00:15:50.955 "zoned": false, 00:15:50.955 "supported_io_types": { 00:15:50.955 "read": true, 00:15:50.956 "write": true, 00:15:50.956 "unmap": true, 00:15:50.956 "flush": true, 00:15:50.956 "reset": true, 00:15:50.956 "nvme_admin": false, 00:15:50.956 "nvme_io": false, 00:15:50.956 "nvme_io_md": false, 00:15:50.956 "write_zeroes": true, 00:15:50.956 "zcopy": true, 00:15:50.956 "get_zone_info": false, 00:15:50.956 "zone_management": false, 00:15:50.956 "zone_append": false, 00:15:50.956 "compare": false, 00:15:50.956 "compare_and_write": false, 00:15:50.956 "abort": true, 00:15:50.956 "seek_hole": false, 00:15:50.956 "seek_data": false, 00:15:50.956 "copy": true, 00:15:50.956 "nvme_iov_md": false 00:15:50.956 }, 00:15:50.956 "memory_domains": [ 00:15:50.956 { 00:15:50.956 "dma_device_id": "system", 00:15:50.956 "dma_device_type": 1 00:15:50.956 }, 00:15:50.956 { 00:15:50.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.956 "dma_device_type": 2 00:15:50.956 } 00:15:50.956 ], 00:15:50.956 "driver_specific": {} 00:15:50.956 } 00:15:50.956 ] 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.956 22:57:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.956 "name": "Existed_Raid", 00:15:50.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.956 "strip_size_kb": 64, 00:15:50.956 "state": "configuring", 00:15:50.956 "raid_level": "concat", 00:15:50.956 "superblock": false, 00:15:50.956 "num_base_bdevs": 3, 00:15:50.956 "num_base_bdevs_discovered": 2, 00:15:50.956 "num_base_bdevs_operational": 3, 00:15:50.956 "base_bdevs_list": [ 00:15:50.956 { 00:15:50.956 "name": "BaseBdev1", 
00:15:50.956 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:50.956 "is_configured": true, 00:15:50.956 "data_offset": 0, 00:15:50.956 "data_size": 65536 00:15:50.956 }, 00:15:50.956 { 00:15:50.956 "name": null, 00:15:50.956 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:50.956 "is_configured": false, 00:15:50.956 "data_offset": 0, 00:15:50.956 "data_size": 65536 00:15:50.956 }, 00:15:50.956 { 00:15:50.956 "name": "BaseBdev3", 00:15:50.956 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:50.956 "is_configured": true, 00:15:50.956 "data_offset": 0, 00:15:50.956 "data_size": 65536 00:15:50.956 } 00:15:50.956 ] 00:15:50.956 }' 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.956 22:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.524 [2024-12-09 22:57:07.154199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:51.524 
22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.524 "name": "Existed_Raid", 00:15:51.524 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:51.524 "strip_size_kb": 64, 00:15:51.524 "state": "configuring", 00:15:51.524 "raid_level": "concat", 00:15:51.524 "superblock": false, 00:15:51.524 "num_base_bdevs": 3, 00:15:51.524 "num_base_bdevs_discovered": 1, 00:15:51.524 "num_base_bdevs_operational": 3, 00:15:51.524 "base_bdevs_list": [ 00:15:51.524 { 00:15:51.524 "name": "BaseBdev1", 00:15:51.524 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:51.524 "is_configured": true, 00:15:51.524 "data_offset": 0, 00:15:51.524 "data_size": 65536 00:15:51.524 }, 00:15:51.524 { 00:15:51.524 "name": null, 00:15:51.524 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:51.524 "is_configured": false, 00:15:51.524 "data_offset": 0, 00:15:51.524 "data_size": 65536 00:15:51.524 }, 00:15:51.524 { 00:15:51.524 "name": null, 00:15:51.524 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:51.524 "is_configured": false, 00:15:51.524 "data_offset": 0, 00:15:51.524 "data_size": 65536 00:15:51.524 } 00:15:51.524 ] 00:15:51.524 }' 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.524 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.783 [2024-12-09 22:57:07.629513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.783 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.043 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.043 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.044 22:57:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.044 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.044 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.044 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.044 "name": "Existed_Raid", 00:15:52.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.044 "strip_size_kb": 64, 00:15:52.044 "state": "configuring", 00:15:52.044 "raid_level": "concat", 00:15:52.044 "superblock": false, 00:15:52.044 "num_base_bdevs": 3, 00:15:52.044 "num_base_bdevs_discovered": 2, 00:15:52.044 "num_base_bdevs_operational": 3, 00:15:52.044 "base_bdevs_list": [ 00:15:52.044 { 00:15:52.044 "name": "BaseBdev1", 00:15:52.044 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:52.044 "is_configured": true, 00:15:52.044 "data_offset": 0, 00:15:52.044 "data_size": 65536 00:15:52.044 }, 00:15:52.044 { 00:15:52.044 "name": null, 00:15:52.044 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:52.044 "is_configured": false, 00:15:52.044 "data_offset": 0, 00:15:52.044 "data_size": 65536 00:15:52.044 }, 00:15:52.044 { 00:15:52.044 "name": "BaseBdev3", 00:15:52.044 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:52.044 "is_configured": true, 00:15:52.044 "data_offset": 0, 00:15:52.044 "data_size": 65536 00:15:52.044 } 00:15:52.044 ] 00:15:52.044 }' 00:15:52.044 22:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.044 22:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.336 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.336 [2024-12-09 22:57:08.124709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.596 22:57:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.596 "name": "Existed_Raid", 00:15:52.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.596 "strip_size_kb": 64, 00:15:52.596 "state": "configuring", 00:15:52.596 "raid_level": "concat", 00:15:52.596 "superblock": false, 00:15:52.596 "num_base_bdevs": 3, 00:15:52.596 "num_base_bdevs_discovered": 1, 00:15:52.596 "num_base_bdevs_operational": 3, 00:15:52.596 "base_bdevs_list": [ 00:15:52.596 { 00:15:52.596 "name": null, 00:15:52.596 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:52.596 "is_configured": false, 00:15:52.596 "data_offset": 0, 00:15:52.596 "data_size": 65536 00:15:52.596 }, 00:15:52.596 { 00:15:52.596 "name": null, 00:15:52.596 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:52.596 "is_configured": false, 00:15:52.596 "data_offset": 0, 00:15:52.596 "data_size": 65536 00:15:52.596 }, 00:15:52.596 { 00:15:52.596 "name": "BaseBdev3", 00:15:52.596 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:52.596 "is_configured": true, 00:15:52.596 "data_offset": 0, 00:15:52.596 "data_size": 65536 00:15:52.596 } 00:15:52.596 ] 00:15:52.596 }' 00:15:52.596 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.596 22:57:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.856 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:52.856 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.856 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.856 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.856 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.117 [2024-12-09 22:57:08.722871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.117 22:57:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.117 "name": "Existed_Raid", 00:15:53.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.117 "strip_size_kb": 64, 00:15:53.117 "state": "configuring", 00:15:53.117 "raid_level": "concat", 00:15:53.117 "superblock": false, 00:15:53.117 "num_base_bdevs": 3, 00:15:53.117 "num_base_bdevs_discovered": 2, 00:15:53.117 "num_base_bdevs_operational": 3, 00:15:53.117 "base_bdevs_list": [ 00:15:53.117 { 00:15:53.117 "name": null, 00:15:53.117 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:53.117 "is_configured": false, 00:15:53.117 "data_offset": 0, 00:15:53.117 "data_size": 65536 00:15:53.117 }, 00:15:53.117 { 00:15:53.117 "name": "BaseBdev2", 00:15:53.117 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:53.117 "is_configured": true, 00:15:53.117 "data_offset": 
0, 00:15:53.117 "data_size": 65536 00:15:53.117 }, 00:15:53.117 { 00:15:53.117 "name": "BaseBdev3", 00:15:53.117 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:53.117 "is_configured": true, 00:15:53.117 "data_offset": 0, 00:15:53.117 "data_size": 65536 00:15:53.117 } 00:15:53.117 ] 00:15:53.117 }' 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.117 22:57:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.377 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.377 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.377 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.377 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.638 [2024-12-09 22:57:09.367656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:53.638 [2024-12-09 22:57:09.367733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:53.638 [2024-12-09 22:57:09.367745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:53.638 [2024-12-09 22:57:09.368056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:53.638 [2024-12-09 22:57:09.368253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:53.638 [2024-12-09 22:57:09.368265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:53.638 [2024-12-09 22:57:09.368671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.638 NewBaseBdev 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.638 
22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.638 [ 00:15:53.638 { 00:15:53.638 "name": "NewBaseBdev", 00:15:53.638 "aliases": [ 00:15:53.638 "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126" 00:15:53.638 ], 00:15:53.638 "product_name": "Malloc disk", 00:15:53.638 "block_size": 512, 00:15:53.638 "num_blocks": 65536, 00:15:53.638 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:53.638 "assigned_rate_limits": { 00:15:53.638 "rw_ios_per_sec": 0, 00:15:53.638 "rw_mbytes_per_sec": 0, 00:15:53.638 "r_mbytes_per_sec": 0, 00:15:53.638 "w_mbytes_per_sec": 0 00:15:53.638 }, 00:15:53.638 "claimed": true, 00:15:53.638 "claim_type": "exclusive_write", 00:15:53.638 "zoned": false, 00:15:53.638 "supported_io_types": { 00:15:53.638 "read": true, 00:15:53.638 "write": true, 00:15:53.638 "unmap": true, 00:15:53.638 "flush": true, 00:15:53.638 "reset": true, 00:15:53.638 "nvme_admin": false, 00:15:53.638 "nvme_io": false, 00:15:53.638 "nvme_io_md": false, 00:15:53.638 "write_zeroes": true, 00:15:53.638 "zcopy": true, 00:15:53.638 "get_zone_info": false, 00:15:53.638 "zone_management": false, 00:15:53.638 "zone_append": false, 00:15:53.638 "compare": false, 00:15:53.638 "compare_and_write": false, 00:15:53.638 "abort": true, 00:15:53.638 "seek_hole": false, 00:15:53.638 "seek_data": false, 00:15:53.638 "copy": true, 00:15:53.638 "nvme_iov_md": false 00:15:53.638 }, 00:15:53.638 
"memory_domains": [ 00:15:53.638 { 00:15:53.638 "dma_device_id": "system", 00:15:53.638 "dma_device_type": 1 00:15:53.638 }, 00:15:53.638 { 00:15:53.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.638 "dma_device_type": 2 00:15:53.638 } 00:15:53.638 ], 00:15:53.638 "driver_specific": {} 00:15:53.638 } 00:15:53.638 ] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.638 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.638 "name": "Existed_Raid", 00:15:53.638 "uuid": "9981b5da-a8a7-40ea-8bb7-9e54a4a92ee1", 00:15:53.638 "strip_size_kb": 64, 00:15:53.638 "state": "online", 00:15:53.638 "raid_level": "concat", 00:15:53.638 "superblock": false, 00:15:53.638 "num_base_bdevs": 3, 00:15:53.638 "num_base_bdevs_discovered": 3, 00:15:53.638 "num_base_bdevs_operational": 3, 00:15:53.638 "base_bdevs_list": [ 00:15:53.638 { 00:15:53.638 "name": "NewBaseBdev", 00:15:53.639 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:53.639 "is_configured": true, 00:15:53.639 "data_offset": 0, 00:15:53.639 "data_size": 65536 00:15:53.639 }, 00:15:53.639 { 00:15:53.639 "name": "BaseBdev2", 00:15:53.639 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:53.639 "is_configured": true, 00:15:53.639 "data_offset": 0, 00:15:53.639 "data_size": 65536 00:15:53.639 }, 00:15:53.639 { 00:15:53.639 "name": "BaseBdev3", 00:15:53.639 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:53.639 "is_configured": true, 00:15:53.639 "data_offset": 0, 00:15:53.639 "data_size": 65536 00:15:53.639 } 00:15:53.639 ] 00:15:53.639 }' 00:15:53.639 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.639 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 [2024-12-09 22:57:09.847274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.208 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.208 "name": "Existed_Raid", 00:15:54.208 "aliases": [ 00:15:54.208 "9981b5da-a8a7-40ea-8bb7-9e54a4a92ee1" 00:15:54.208 ], 00:15:54.208 "product_name": "Raid Volume", 00:15:54.208 "block_size": 512, 00:15:54.208 "num_blocks": 196608, 00:15:54.208 "uuid": "9981b5da-a8a7-40ea-8bb7-9e54a4a92ee1", 00:15:54.208 "assigned_rate_limits": { 00:15:54.208 "rw_ios_per_sec": 0, 00:15:54.208 "rw_mbytes_per_sec": 0, 00:15:54.208 "r_mbytes_per_sec": 0, 00:15:54.208 "w_mbytes_per_sec": 0 00:15:54.208 }, 00:15:54.208 "claimed": false, 00:15:54.208 "zoned": false, 00:15:54.208 "supported_io_types": { 00:15:54.208 "read": true, 00:15:54.208 "write": true, 00:15:54.208 "unmap": true, 00:15:54.208 "flush": true, 00:15:54.208 "reset": true, 00:15:54.208 "nvme_admin": false, 00:15:54.208 "nvme_io": false, 00:15:54.208 "nvme_io_md": false, 00:15:54.208 
"write_zeroes": true, 00:15:54.208 "zcopy": false, 00:15:54.208 "get_zone_info": false, 00:15:54.208 "zone_management": false, 00:15:54.208 "zone_append": false, 00:15:54.208 "compare": false, 00:15:54.208 "compare_and_write": false, 00:15:54.208 "abort": false, 00:15:54.208 "seek_hole": false, 00:15:54.208 "seek_data": false, 00:15:54.208 "copy": false, 00:15:54.208 "nvme_iov_md": false 00:15:54.208 }, 00:15:54.208 "memory_domains": [ 00:15:54.208 { 00:15:54.208 "dma_device_id": "system", 00:15:54.208 "dma_device_type": 1 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.208 "dma_device_type": 2 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "system", 00:15:54.208 "dma_device_type": 1 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.208 "dma_device_type": 2 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "system", 00:15:54.208 "dma_device_type": 1 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.208 "dma_device_type": 2 00:15:54.208 } 00:15:54.208 ], 00:15:54.208 "driver_specific": { 00:15:54.208 "raid": { 00:15:54.208 "uuid": "9981b5da-a8a7-40ea-8bb7-9e54a4a92ee1", 00:15:54.208 "strip_size_kb": 64, 00:15:54.208 "state": "online", 00:15:54.208 "raid_level": "concat", 00:15:54.208 "superblock": false, 00:15:54.208 "num_base_bdevs": 3, 00:15:54.208 "num_base_bdevs_discovered": 3, 00:15:54.208 "num_base_bdevs_operational": 3, 00:15:54.208 "base_bdevs_list": [ 00:15:54.208 { 00:15:54.208 "name": "NewBaseBdev", 00:15:54.208 "uuid": "5ae45a07-d4d5-4d56-a0ff-bcf2dc4e9126", 00:15:54.208 "is_configured": true, 00:15:54.208 "data_offset": 0, 00:15:54.208 "data_size": 65536 00:15:54.208 }, 00:15:54.208 { 00:15:54.208 "name": "BaseBdev2", 00:15:54.208 "uuid": "84e162c2-3c81-4a02-aeaf-777b182d0eff", 00:15:54.208 "is_configured": true, 00:15:54.208 "data_offset": 0, 00:15:54.209 "data_size": 65536 00:15:54.209 }, 
00:15:54.209 { 00:15:54.209 "name": "BaseBdev3", 00:15:54.209 "uuid": "5238e5b7-8871-440d-948a-707889745162", 00:15:54.209 "is_configured": true, 00:15:54.209 "data_offset": 0, 00:15:54.209 "data_size": 65536 00:15:54.209 } 00:15:54.209 ] 00:15:54.209 } 00:15:54.209 } 00:15:54.209 }' 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:54.209 BaseBdev2 00:15:54.209 BaseBdev3' 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.209 22:57:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.209 22:57:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.209 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.472 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.472 
22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.473 [2024-12-09 22:57:10.150581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.473 [2024-12-09 22:57:10.150626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.473 [2024-12-09 22:57:10.150758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.473 [2024-12-09 22:57:10.150835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.473 [2024-12-09 22:57:10.150852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66098 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66098 ']' 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66098 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66098 00:15:54.473 killing process with pid 66098 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66098' 00:15:54.473 22:57:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 66098 00:15:54.473 [2024-12-09 22:57:10.200185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.473 22:57:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66098 00:15:54.732 [2024-12-09 22:57:10.575840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.111 ************************************ 00:15:56.111 END TEST raid_state_function_test 00:15:56.111 22:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:56.111 00:15:56.111 real 0m11.545s 00:15:56.111 user 0m17.855s 00:15:56.111 sys 0m2.211s 00:15:56.111 22:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.111 22:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.111 ************************************ 00:15:56.371 22:57:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:56.371 22:57:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:56.371 22:57:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.371 22:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.371 ************************************ 00:15:56.371 START TEST raid_state_function_test_sb 00:15:56.371 ************************************ 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:56.371 22:57:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:56.371 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:56.372 22:57:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:56.372 Process raid pid: 66735 00:15:56.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66735 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66735' 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66735 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66735 ']' 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.372 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:56.372 [2024-12-09 22:57:12.113807] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:15:56.372 [2024-12-09 22:57:12.114027] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.631 [2024-12-09 22:57:12.272989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.631 [2024-12-09 22:57:12.429847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.890 [2024-12-09 22:57:12.687589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.890 [2024-12-09 22:57:12.687765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.150 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.150 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:57.150 22:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:57.150 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.150 22:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.409 [2024-12-09 22:57:13.006743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:15:57.409 [2024-12-09 22:57:13.006919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.409 [2024-12-09 22:57:13.006959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.410 [2024-12-09 22:57:13.006977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.410 [2024-12-09 22:57:13.006986] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.410 [2024-12-09 22:57:13.006998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.410 22:57:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.410 "name": "Existed_Raid", 00:15:57.410 "uuid": "f1ccbe88-2ae7-49b9-9347-bebd4f567ac1", 00:15:57.410 "strip_size_kb": 64, 00:15:57.410 "state": "configuring", 00:15:57.410 "raid_level": "concat", 00:15:57.410 "superblock": true, 00:15:57.410 "num_base_bdevs": 3, 00:15:57.410 "num_base_bdevs_discovered": 0, 00:15:57.410 "num_base_bdevs_operational": 3, 00:15:57.410 "base_bdevs_list": [ 00:15:57.410 { 00:15:57.410 "name": "BaseBdev1", 00:15:57.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.410 "is_configured": false, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 0 00:15:57.410 }, 00:15:57.410 { 00:15:57.410 "name": "BaseBdev2", 00:15:57.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.410 "is_configured": false, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 0 00:15:57.410 }, 00:15:57.410 { 00:15:57.410 "name": "BaseBdev3", 00:15:57.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.410 "is_configured": false, 00:15:57.410 "data_offset": 0, 00:15:57.410 "data_size": 0 00:15:57.410 } 00:15:57.410 ] 00:15:57.410 }' 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.410 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.669 22:57:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.669 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.669 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.669 [2024-12-09 22:57:13.517845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.669 [2024-12-09 22:57:13.517989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:57.669 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.669 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:57.669 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.669 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.929 [2024-12-09 22:57:13.529838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.929 [2024-12-09 22:57:13.529957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.929 [2024-12-09 22:57:13.529998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.929 [2024-12-09 22:57:13.530027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.929 [2024-12-09 22:57:13.530061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.929 [2024-12-09 22:57:13.530090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.929 [2024-12-09 22:57:13.591152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.929 BaseBdev1 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.929 [ 00:15:57.929 { 00:15:57.929 "name": "BaseBdev1", 00:15:57.929 "aliases": [ 00:15:57.929 "06ffaa22-92f8-4df9-98ab-841ee83123e6" 00:15:57.929 ], 00:15:57.929 "product_name": "Malloc disk", 00:15:57.929 "block_size": 512, 00:15:57.929 "num_blocks": 65536, 00:15:57.929 "uuid": "06ffaa22-92f8-4df9-98ab-841ee83123e6", 00:15:57.929 "assigned_rate_limits": { 00:15:57.929 "rw_ios_per_sec": 0, 00:15:57.929 "rw_mbytes_per_sec": 0, 00:15:57.929 "r_mbytes_per_sec": 0, 00:15:57.929 "w_mbytes_per_sec": 0 00:15:57.929 }, 00:15:57.929 "claimed": true, 00:15:57.929 "claim_type": "exclusive_write", 00:15:57.929 "zoned": false, 00:15:57.929 "supported_io_types": { 00:15:57.929 "read": true, 00:15:57.929 "write": true, 00:15:57.929 "unmap": true, 00:15:57.929 "flush": true, 00:15:57.929 "reset": true, 00:15:57.929 "nvme_admin": false, 00:15:57.929 "nvme_io": false, 00:15:57.929 "nvme_io_md": false, 00:15:57.929 "write_zeroes": true, 00:15:57.929 "zcopy": true, 00:15:57.929 "get_zone_info": false, 00:15:57.929 "zone_management": false, 00:15:57.929 "zone_append": false, 00:15:57.929 "compare": false, 00:15:57.929 "compare_and_write": false, 00:15:57.929 "abort": true, 00:15:57.929 "seek_hole": false, 00:15:57.929 "seek_data": false, 00:15:57.929 "copy": true, 00:15:57.929 "nvme_iov_md": false 00:15:57.929 }, 00:15:57.929 "memory_domains": [ 00:15:57.929 { 00:15:57.929 "dma_device_id": "system", 00:15:57.929 "dma_device_type": 1 00:15:57.929 }, 00:15:57.929 { 00:15:57.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.929 "dma_device_type": 2 00:15:57.929 } 00:15:57.929 ], 00:15:57.929 "driver_specific": {} 00:15:57.929 } 00:15:57.929 ] 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.929 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.930 "name": "Existed_Raid", 00:15:57.930 "uuid": 
"92993343-11f3-4666-97ba-7d226f421a9b", 00:15:57.930 "strip_size_kb": 64, 00:15:57.930 "state": "configuring", 00:15:57.930 "raid_level": "concat", 00:15:57.930 "superblock": true, 00:15:57.930 "num_base_bdevs": 3, 00:15:57.930 "num_base_bdevs_discovered": 1, 00:15:57.930 "num_base_bdevs_operational": 3, 00:15:57.930 "base_bdevs_list": [ 00:15:57.930 { 00:15:57.930 "name": "BaseBdev1", 00:15:57.930 "uuid": "06ffaa22-92f8-4df9-98ab-841ee83123e6", 00:15:57.930 "is_configured": true, 00:15:57.930 "data_offset": 2048, 00:15:57.930 "data_size": 63488 00:15:57.930 }, 00:15:57.930 { 00:15:57.930 "name": "BaseBdev2", 00:15:57.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.930 "is_configured": false, 00:15:57.930 "data_offset": 0, 00:15:57.930 "data_size": 0 00:15:57.930 }, 00:15:57.930 { 00:15:57.930 "name": "BaseBdev3", 00:15:57.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.930 "is_configured": false, 00:15:57.930 "data_offset": 0, 00:15:57.930 "data_size": 0 00:15:57.930 } 00:15:57.930 ] 00:15:57.930 }' 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.930 22:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.499 [2024-12-09 22:57:14.094443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.499 [2024-12-09 22:57:14.094533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.499 
22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.499 [2024-12-09 22:57:14.106478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.499 [2024-12-09 22:57:14.108905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.499 [2024-12-09 22:57:14.108958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.499 [2024-12-09 22:57:14.108971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.499 [2024-12-09 22:57:14.108982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.499 "name": "Existed_Raid", 00:15:58.499 "uuid": "073869fc-1fa9-435e-bfb5-556d65a7601d", 00:15:58.499 "strip_size_kb": 64, 00:15:58.499 "state": "configuring", 00:15:58.499 "raid_level": "concat", 00:15:58.499 "superblock": true, 00:15:58.499 "num_base_bdevs": 3, 00:15:58.499 "num_base_bdevs_discovered": 1, 00:15:58.499 "num_base_bdevs_operational": 3, 00:15:58.499 "base_bdevs_list": [ 00:15:58.499 { 00:15:58.499 "name": "BaseBdev1", 00:15:58.499 "uuid": "06ffaa22-92f8-4df9-98ab-841ee83123e6", 00:15:58.499 "is_configured": true, 00:15:58.499 "data_offset": 2048, 00:15:58.499 "data_size": 63488 00:15:58.499 }, 00:15:58.499 { 00:15:58.499 "name": "BaseBdev2", 00:15:58.499 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:58.499 "is_configured": false, 00:15:58.499 "data_offset": 0, 00:15:58.499 "data_size": 0 00:15:58.499 }, 00:15:58.499 { 00:15:58.499 "name": "BaseBdev3", 00:15:58.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.499 "is_configured": false, 00:15:58.499 "data_offset": 0, 00:15:58.499 "data_size": 0 00:15:58.499 } 00:15:58.499 ] 00:15:58.499 }' 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.499 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.776 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:58.776 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.776 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.054 [2024-12-09 22:57:14.624889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.054 BaseBdev2 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.054 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.054 [ 00:15:59.054 { 00:15:59.054 "name": "BaseBdev2", 00:15:59.054 "aliases": [ 00:15:59.054 "a1871f51-6cd4-4077-b5f2-b860327e8769" 00:15:59.054 ], 00:15:59.054 "product_name": "Malloc disk", 00:15:59.054 "block_size": 512, 00:15:59.054 "num_blocks": 65536, 00:15:59.054 "uuid": "a1871f51-6cd4-4077-b5f2-b860327e8769", 00:15:59.054 "assigned_rate_limits": { 00:15:59.054 "rw_ios_per_sec": 0, 00:15:59.055 "rw_mbytes_per_sec": 0, 00:15:59.055 "r_mbytes_per_sec": 0, 00:15:59.055 "w_mbytes_per_sec": 0 00:15:59.055 }, 00:15:59.055 "claimed": true, 00:15:59.055 "claim_type": "exclusive_write", 00:15:59.055 "zoned": false, 00:15:59.055 "supported_io_types": { 00:15:59.055 "read": true, 00:15:59.055 "write": true, 00:15:59.055 "unmap": true, 00:15:59.055 "flush": true, 00:15:59.055 "reset": true, 00:15:59.055 "nvme_admin": false, 00:15:59.055 "nvme_io": false, 00:15:59.055 "nvme_io_md": false, 00:15:59.055 "write_zeroes": true, 00:15:59.055 "zcopy": true, 00:15:59.055 "get_zone_info": false, 00:15:59.055 "zone_management": false, 00:15:59.055 "zone_append": false, 00:15:59.055 "compare": false, 00:15:59.055 "compare_and_write": false, 00:15:59.055 "abort": true, 00:15:59.055 "seek_hole": false, 00:15:59.055 "seek_data": false, 00:15:59.055 "copy": true, 00:15:59.055 "nvme_iov_md": false 
00:15:59.055 }, 00:15:59.055 "memory_domains": [ 00:15:59.055 { 00:15:59.055 "dma_device_id": "system", 00:15:59.055 "dma_device_type": 1 00:15:59.055 }, 00:15:59.055 { 00:15:59.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.055 "dma_device_type": 2 00:15:59.055 } 00:15:59.055 ], 00:15:59.055 "driver_specific": {} 00:15:59.055 } 00:15:59.055 ] 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.055 22:57:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.055 "name": "Existed_Raid", 00:15:59.055 "uuid": "073869fc-1fa9-435e-bfb5-556d65a7601d", 00:15:59.055 "strip_size_kb": 64, 00:15:59.055 "state": "configuring", 00:15:59.055 "raid_level": "concat", 00:15:59.055 "superblock": true, 00:15:59.055 "num_base_bdevs": 3, 00:15:59.055 "num_base_bdevs_discovered": 2, 00:15:59.055 "num_base_bdevs_operational": 3, 00:15:59.055 "base_bdevs_list": [ 00:15:59.055 { 00:15:59.055 "name": "BaseBdev1", 00:15:59.055 "uuid": "06ffaa22-92f8-4df9-98ab-841ee83123e6", 00:15:59.055 "is_configured": true, 00:15:59.055 "data_offset": 2048, 00:15:59.055 "data_size": 63488 00:15:59.055 }, 00:15:59.055 { 00:15:59.055 "name": "BaseBdev2", 00:15:59.055 "uuid": "a1871f51-6cd4-4077-b5f2-b860327e8769", 00:15:59.055 "is_configured": true, 00:15:59.055 "data_offset": 2048, 00:15:59.055 "data_size": 63488 00:15:59.055 }, 00:15:59.055 { 00:15:59.055 "name": "BaseBdev3", 00:15:59.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.055 "is_configured": false, 00:15:59.055 "data_offset": 0, 00:15:59.055 "data_size": 0 00:15:59.055 } 00:15:59.055 ] 00:15:59.055 }' 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.055 22:57:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.314 
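As an aside on the `bdev_raid.sh@113` step above: the test captures `raid_bdev_info` by piping `rpc_cmd bdev_raid_get_bdevs all` through a jq selector. A minimal sketch of that selection, using toy JSON as a stand-in for the RPC output (the field values are illustrative, not taken from this run) and assuming `jq` is on PATH:

```shell
# Toy stand-in for `rpc.py bdev_raid_get_bdevs all` output; the values
# here are illustrative, not copied from the live run above.
json='[{"name":"Existed_Raid","state":"configuring"},{"name":"Other_Raid","state":"online"}]'

# Same filter as bdev_raid.sh@113: keep only the entry named "Existed_Raid".
info=$(echo "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

# The test then reads individual fields out of the captured blob.
state=$(echo "$info" | jq -r '.state')
echo "$state"
```

The harness asserts on fields like `state`, `num_base_bdevs_discovered`, and `num_base_bdevs_operational` pulled from this captured JSON as each base bdev is added.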
22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:59.314 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.314 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 [2024-12-09 22:57:15.197111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.574 [2024-12-09 22:57:15.197581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:59.574 [2024-12-09 22:57:15.197626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:59.574 [2024-12-09 22:57:15.197948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:59.574 [2024-12-09 22:57:15.198126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:59.574 [2024-12-09 22:57:15.198138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:59.574 BaseBdev3 00:15:59.574 [2024-12-09 22:57:15.198306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.574 22:57:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 [ 00:15:59.574 { 00:15:59.574 "name": "BaseBdev3", 00:15:59.574 "aliases": [ 00:15:59.574 "b0ff3ac1-6827-4d2f-9435-695920dfa1f8" 00:15:59.574 ], 00:15:59.574 "product_name": "Malloc disk", 00:15:59.574 "block_size": 512, 00:15:59.574 "num_blocks": 65536, 00:15:59.574 "uuid": "b0ff3ac1-6827-4d2f-9435-695920dfa1f8", 00:15:59.574 "assigned_rate_limits": { 00:15:59.574 "rw_ios_per_sec": 0, 00:15:59.574 "rw_mbytes_per_sec": 0, 00:15:59.574 "r_mbytes_per_sec": 0, 00:15:59.574 "w_mbytes_per_sec": 0 00:15:59.574 }, 00:15:59.574 "claimed": true, 00:15:59.574 "claim_type": "exclusive_write", 00:15:59.574 "zoned": false, 00:15:59.574 "supported_io_types": { 00:15:59.574 "read": true, 00:15:59.574 "write": true, 00:15:59.574 "unmap": true, 00:15:59.574 "flush": true, 00:15:59.574 "reset": true, 00:15:59.574 "nvme_admin": false, 00:15:59.574 "nvme_io": false, 00:15:59.574 "nvme_io_md": false, 00:15:59.574 "write_zeroes": true, 00:15:59.574 "zcopy": true, 00:15:59.574 "get_zone_info": false, 00:15:59.574 "zone_management": false, 00:15:59.574 "zone_append": false, 
00:15:59.574 "compare": false, 00:15:59.574 "compare_and_write": false, 00:15:59.574 "abort": true, 00:15:59.574 "seek_hole": false, 00:15:59.574 "seek_data": false, 00:15:59.574 "copy": true, 00:15:59.574 "nvme_iov_md": false 00:15:59.574 }, 00:15:59.574 "memory_domains": [ 00:15:59.574 { 00:15:59.574 "dma_device_id": "system", 00:15:59.574 "dma_device_type": 1 00:15:59.574 }, 00:15:59.574 { 00:15:59.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.574 "dma_device_type": 2 00:15:59.574 } 00:15:59.574 ], 00:15:59.574 "driver_specific": {} 00:15:59.574 } 00:15:59.574 ] 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.574 22:57:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.574 "name": "Existed_Raid", 00:15:59.574 "uuid": "073869fc-1fa9-435e-bfb5-556d65a7601d", 00:15:59.574 "strip_size_kb": 64, 00:15:59.574 "state": "online", 00:15:59.574 "raid_level": "concat", 00:15:59.574 "superblock": true, 00:15:59.574 "num_base_bdevs": 3, 00:15:59.574 "num_base_bdevs_discovered": 3, 00:15:59.574 "num_base_bdevs_operational": 3, 00:15:59.574 "base_bdevs_list": [ 00:15:59.574 { 00:15:59.574 "name": "BaseBdev1", 00:15:59.574 "uuid": "06ffaa22-92f8-4df9-98ab-841ee83123e6", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 }, 00:15:59.574 { 00:15:59.574 "name": "BaseBdev2", 00:15:59.574 "uuid": "a1871f51-6cd4-4077-b5f2-b860327e8769", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 }, 00:15:59.574 { 00:15:59.574 "name": "BaseBdev3", 00:15:59.574 "uuid": "b0ff3ac1-6827-4d2f-9435-695920dfa1f8", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 } 00:15:59.574 ] 00:15:59.574 
}' 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.574 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.833 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.833 [2024-12-09 22:57:15.688873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.092 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.092 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.092 "name": "Existed_Raid", 00:16:00.092 "aliases": [ 00:16:00.092 "073869fc-1fa9-435e-bfb5-556d65a7601d" 00:16:00.092 ], 00:16:00.092 "product_name": "Raid Volume", 00:16:00.092 "block_size": 512, 00:16:00.093 "num_blocks": 190464, 00:16:00.093 "uuid": 
"073869fc-1fa9-435e-bfb5-556d65a7601d", 00:16:00.093 "assigned_rate_limits": { 00:16:00.093 "rw_ios_per_sec": 0, 00:16:00.093 "rw_mbytes_per_sec": 0, 00:16:00.093 "r_mbytes_per_sec": 0, 00:16:00.093 "w_mbytes_per_sec": 0 00:16:00.093 }, 00:16:00.093 "claimed": false, 00:16:00.093 "zoned": false, 00:16:00.093 "supported_io_types": { 00:16:00.093 "read": true, 00:16:00.093 "write": true, 00:16:00.093 "unmap": true, 00:16:00.093 "flush": true, 00:16:00.093 "reset": true, 00:16:00.093 "nvme_admin": false, 00:16:00.093 "nvme_io": false, 00:16:00.093 "nvme_io_md": false, 00:16:00.093 "write_zeroes": true, 00:16:00.093 "zcopy": false, 00:16:00.093 "get_zone_info": false, 00:16:00.093 "zone_management": false, 00:16:00.093 "zone_append": false, 00:16:00.093 "compare": false, 00:16:00.093 "compare_and_write": false, 00:16:00.093 "abort": false, 00:16:00.093 "seek_hole": false, 00:16:00.093 "seek_data": false, 00:16:00.093 "copy": false, 00:16:00.093 "nvme_iov_md": false 00:16:00.093 }, 00:16:00.093 "memory_domains": [ 00:16:00.093 { 00:16:00.093 "dma_device_id": "system", 00:16:00.093 "dma_device_type": 1 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.093 "dma_device_type": 2 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "dma_device_id": "system", 00:16:00.093 "dma_device_type": 1 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.093 "dma_device_type": 2 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "dma_device_id": "system", 00:16:00.093 "dma_device_type": 1 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.093 "dma_device_type": 2 00:16:00.093 } 00:16:00.093 ], 00:16:00.093 "driver_specific": { 00:16:00.093 "raid": { 00:16:00.093 "uuid": "073869fc-1fa9-435e-bfb5-556d65a7601d", 00:16:00.093 "strip_size_kb": 64, 00:16:00.093 "state": "online", 00:16:00.093 "raid_level": "concat", 00:16:00.093 "superblock": true, 00:16:00.093 "num_base_bdevs": 
3, 00:16:00.093 "num_base_bdevs_discovered": 3, 00:16:00.093 "num_base_bdevs_operational": 3, 00:16:00.093 "base_bdevs_list": [ 00:16:00.093 { 00:16:00.093 "name": "BaseBdev1", 00:16:00.093 "uuid": "06ffaa22-92f8-4df9-98ab-841ee83123e6", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": "BaseBdev2", 00:16:00.093 "uuid": "a1871f51-6cd4-4077-b5f2-b860327e8769", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": "BaseBdev3", 00:16:00.093 "uuid": "b0ff3ac1-6827-4d2f-9435-695920dfa1f8", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 } 00:16:00.093 ] 00:16:00.093 } 00:16:00.093 } 00:16:00.093 }' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:00.093 BaseBdev2 00:16:00.093 BaseBdev3' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
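The `bdev_raid.sh@188` filter above extracts the names of configured base bdevs from the raid volume's `driver_specific.raid` section. A hedged sketch against toy JSON (only the fields the filter touches are included; assumes `jq` is available):

```shell
# Toy raid-volume JSON; illustrative, not the run's actual RPC output.
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":false}]}}}'

# Same filter as bdev_raid.sh@188: configured base bdev names only,
# one per line (unconfigured entries are dropped).
names=$(echo "$json" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"
```

In the log above all three base bdevs are configured by this point, which is why `base_bdev_names` comes back as `BaseBdev1 BaseBdev2 BaseBdev3`.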
xtrace_disable 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.093 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.353 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.353 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.353 22:57:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:00.353 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.353 22:57:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.353 [2024-12-09 22:57:15.984025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.353 [2024-12-09 22:57:15.984079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.353 [2024-12-09 22:57:15.984149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
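The `[[ 512 == \5\1\2\ \ \ ]]` comparisons above look odd until you see how jq's `join` renders missing fields: the signature is built from four fields, three of which are null for a plain malloc bdev, and `join(" ")` stringifies numbers while turning nulls into empty strings. A small sketch with toy JSON (illustrative; assumes `jq` is installed):

```shell
# Toy bdev JSON with no metadata/DIF fields, as for a plain malloc disk.
json='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'

# Same signature filter as bdev_raid.sh@189/192. With three null fields,
# join(" ") emits "512" followed by three separator spaces.
sig=$(echo "$json" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
printf '%s|\n' "$sig"   # trailing spaces made visible before the bar
```

This is why both `cmp_raid_bdev` and each `cmp_base_bdev` capture as `'512 '` with trailing whitespace, and why the bash pattern on the right-hand side escapes three literal spaces.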
raid_bdev_name=Existed_Raid 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.353 "name": "Existed_Raid", 00:16:00.353 "uuid": "073869fc-1fa9-435e-bfb5-556d65a7601d", 00:16:00.353 "strip_size_kb": 64, 00:16:00.353 "state": "offline", 00:16:00.353 "raid_level": "concat", 00:16:00.353 "superblock": true, 00:16:00.353 "num_base_bdevs": 3, 00:16:00.353 "num_base_bdevs_discovered": 2, 00:16:00.353 "num_base_bdevs_operational": 2, 
00:16:00.353 "base_bdevs_list": [ 00:16:00.353 { 00:16:00.353 "name": null, 00:16:00.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.353 "is_configured": false, 00:16:00.353 "data_offset": 0, 00:16:00.353 "data_size": 63488 00:16:00.353 }, 00:16:00.353 { 00:16:00.353 "name": "BaseBdev2", 00:16:00.353 "uuid": "a1871f51-6cd4-4077-b5f2-b860327e8769", 00:16:00.353 "is_configured": true, 00:16:00.353 "data_offset": 2048, 00:16:00.353 "data_size": 63488 00:16:00.353 }, 00:16:00.353 { 00:16:00.353 "name": "BaseBdev3", 00:16:00.353 "uuid": "b0ff3ac1-6827-4d2f-9435-695920dfa1f8", 00:16:00.353 "is_configured": true, 00:16:00.353 "data_offset": 2048, 00:16:00.353 "data_size": 63488 00:16:00.353 } 00:16:00.353 ] 00:16:00.353 }' 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.353 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.922 22:57:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.922 [2024-12-09 22:57:16.574954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.922 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.922 [2024-12-09 22:57:16.756268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:16:00.922 [2024-12-09 22:57:16.756355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.181 BaseBdev2 00:16:01.181 22:57:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.181 22:57:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.181 [ 00:16:01.181 { 00:16:01.181 "name": "BaseBdev2", 00:16:01.181 "aliases": [ 00:16:01.181 "5373e967-d54d-4a3b-b2b9-21523a6189a0" 00:16:01.181 ], 00:16:01.181 "product_name": "Malloc disk", 00:16:01.181 "block_size": 512, 00:16:01.181 "num_blocks": 65536, 00:16:01.181 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:01.181 "assigned_rate_limits": { 00:16:01.181 "rw_ios_per_sec": 0, 
00:16:01.181 "rw_mbytes_per_sec": 0, 00:16:01.181 "r_mbytes_per_sec": 0, 00:16:01.181 "w_mbytes_per_sec": 0 00:16:01.181 }, 00:16:01.181 "claimed": false, 00:16:01.181 "zoned": false, 00:16:01.181 "supported_io_types": { 00:16:01.181 "read": true, 00:16:01.181 "write": true, 00:16:01.181 "unmap": true, 00:16:01.181 "flush": true, 00:16:01.181 "reset": true, 00:16:01.181 "nvme_admin": false, 00:16:01.181 "nvme_io": false, 00:16:01.181 "nvme_io_md": false, 00:16:01.181 "write_zeroes": true, 00:16:01.181 "zcopy": true, 00:16:01.181 "get_zone_info": false, 00:16:01.181 "zone_management": false, 00:16:01.181 "zone_append": false, 00:16:01.181 "compare": false, 00:16:01.181 "compare_and_write": false, 00:16:01.181 "abort": true, 00:16:01.181 "seek_hole": false, 00:16:01.181 "seek_data": false, 00:16:01.181 "copy": true, 00:16:01.181 "nvme_iov_md": false 00:16:01.181 }, 00:16:01.181 "memory_domains": [ 00:16:01.181 { 00:16:01.181 "dma_device_id": "system", 00:16:01.181 "dma_device_type": 1 00:16:01.181 }, 00:16:01.181 { 00:16:01.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.181 "dma_device_type": 2 00:16:01.181 } 00:16:01.181 ], 00:16:01.181 "driver_specific": {} 00:16:01.181 } 00:16:01.181 ] 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.181 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:01.441 BaseBdev3 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 [ 00:16:01.441 { 00:16:01.441 "name": "BaseBdev3", 00:16:01.441 "aliases": [ 00:16:01.441 "59fe0c8e-e796-48f6-9063-4e1da646f345" 00:16:01.441 ], 00:16:01.441 "product_name": "Malloc disk", 00:16:01.441 "block_size": 512, 00:16:01.441 "num_blocks": 65536, 00:16:01.441 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:01.441 
"assigned_rate_limits": { 00:16:01.441 "rw_ios_per_sec": 0, 00:16:01.441 "rw_mbytes_per_sec": 0, 00:16:01.441 "r_mbytes_per_sec": 0, 00:16:01.441 "w_mbytes_per_sec": 0 00:16:01.441 }, 00:16:01.441 "claimed": false, 00:16:01.441 "zoned": false, 00:16:01.441 "supported_io_types": { 00:16:01.441 "read": true, 00:16:01.441 "write": true, 00:16:01.441 "unmap": true, 00:16:01.441 "flush": true, 00:16:01.441 "reset": true, 00:16:01.441 "nvme_admin": false, 00:16:01.441 "nvme_io": false, 00:16:01.441 "nvme_io_md": false, 00:16:01.441 "write_zeroes": true, 00:16:01.441 "zcopy": true, 00:16:01.441 "get_zone_info": false, 00:16:01.441 "zone_management": false, 00:16:01.441 "zone_append": false, 00:16:01.441 "compare": false, 00:16:01.441 "compare_and_write": false, 00:16:01.441 "abort": true, 00:16:01.441 "seek_hole": false, 00:16:01.441 "seek_data": false, 00:16:01.441 "copy": true, 00:16:01.441 "nvme_iov_md": false 00:16:01.441 }, 00:16:01.441 "memory_domains": [ 00:16:01.441 { 00:16:01.441 "dma_device_id": "system", 00:16:01.441 "dma_device_type": 1 00:16:01.441 }, 00:16:01.441 { 00:16:01.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.441 "dma_device_type": 2 00:16:01.441 } 00:16:01.441 ], 00:16:01.441 "driver_specific": {} 00:16:01.441 } 00:16:01.441 ] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 [2024-12-09 22:57:17.122278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.441 [2024-12-09 22:57:17.122512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.441 [2024-12-09 22:57:17.122584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.441 [2024-12-09 22:57:17.125077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.441 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.441 "name": "Existed_Raid", 00:16:01.441 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:01.441 "strip_size_kb": 64, 00:16:01.441 "state": "configuring", 00:16:01.441 "raid_level": "concat", 00:16:01.441 "superblock": true, 00:16:01.441 "num_base_bdevs": 3, 00:16:01.441 "num_base_bdevs_discovered": 2, 00:16:01.441 "num_base_bdevs_operational": 3, 00:16:01.441 "base_bdevs_list": [ 00:16:01.441 { 00:16:01.441 "name": "BaseBdev1", 00:16:01.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.441 "is_configured": false, 00:16:01.441 "data_offset": 0, 00:16:01.441 "data_size": 0 00:16:01.442 }, 00:16:01.442 { 00:16:01.442 "name": "BaseBdev2", 00:16:01.442 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:01.442 "is_configured": true, 00:16:01.442 "data_offset": 2048, 00:16:01.442 "data_size": 63488 00:16:01.442 }, 00:16:01.442 { 00:16:01.442 "name": "BaseBdev3", 00:16:01.442 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:01.442 "is_configured": true, 00:16:01.442 "data_offset": 2048, 00:16:01.442 "data_size": 63488 00:16:01.442 } 00:16:01.442 ] 00:16:01.442 }' 00:16:01.442 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.442 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.009 22:57:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.009 [2024-12-09 22:57:17.581572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.009 "name": "Existed_Raid", 00:16:02.009 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:02.009 "strip_size_kb": 64, 00:16:02.009 "state": "configuring", 00:16:02.009 "raid_level": "concat", 00:16:02.009 "superblock": true, 00:16:02.009 "num_base_bdevs": 3, 00:16:02.009 "num_base_bdevs_discovered": 1, 00:16:02.009 "num_base_bdevs_operational": 3, 00:16:02.009 "base_bdevs_list": [ 00:16:02.009 { 00:16:02.009 "name": "BaseBdev1", 00:16:02.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.009 "is_configured": false, 00:16:02.009 "data_offset": 0, 00:16:02.009 "data_size": 0 00:16:02.009 }, 00:16:02.009 { 00:16:02.009 "name": null, 00:16:02.009 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:02.009 "is_configured": false, 00:16:02.009 "data_offset": 0, 00:16:02.009 "data_size": 63488 00:16:02.009 }, 00:16:02.009 { 00:16:02.009 "name": "BaseBdev3", 00:16:02.009 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:02.009 "is_configured": true, 00:16:02.009 "data_offset": 2048, 00:16:02.009 "data_size": 63488 00:16:02.009 } 00:16:02.009 ] 00:16:02.009 }' 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.009 22:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.267 22:57:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.267 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.525 [2024-12-09 22:57:18.133312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.525 BaseBdev1 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.525 
22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.525 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.525 [ 00:16:02.525 { 00:16:02.525 "name": "BaseBdev1", 00:16:02.525 "aliases": [ 00:16:02.525 "3047dee8-34c8-4f5c-a374-ea0e51266477" 00:16:02.525 ], 00:16:02.525 "product_name": "Malloc disk", 00:16:02.525 "block_size": 512, 00:16:02.525 "num_blocks": 65536, 00:16:02.525 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:02.525 "assigned_rate_limits": { 00:16:02.525 "rw_ios_per_sec": 0, 00:16:02.525 "rw_mbytes_per_sec": 0, 00:16:02.525 "r_mbytes_per_sec": 0, 00:16:02.525 "w_mbytes_per_sec": 0 00:16:02.525 }, 00:16:02.525 "claimed": true, 00:16:02.525 "claim_type": "exclusive_write", 00:16:02.525 "zoned": false, 00:16:02.525 "supported_io_types": { 00:16:02.525 "read": true, 00:16:02.525 "write": true, 00:16:02.525 "unmap": true, 00:16:02.525 "flush": true, 00:16:02.525 "reset": true, 00:16:02.525 "nvme_admin": false, 00:16:02.525 "nvme_io": false, 00:16:02.525 "nvme_io_md": false, 00:16:02.525 "write_zeroes": true, 00:16:02.525 "zcopy": true, 00:16:02.525 "get_zone_info": false, 00:16:02.525 "zone_management": false, 00:16:02.525 "zone_append": false, 00:16:02.525 "compare": false, 00:16:02.525 "compare_and_write": false, 00:16:02.526 "abort": true, 00:16:02.526 "seek_hole": false, 00:16:02.526 "seek_data": false, 00:16:02.526 "copy": true, 00:16:02.526 "nvme_iov_md": false 00:16:02.526 }, 00:16:02.526 "memory_domains": [ 00:16:02.526 { 00:16:02.526 "dma_device_id": "system", 00:16:02.526 
"dma_device_type": 1 00:16:02.526 }, 00:16:02.526 { 00:16:02.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.526 "dma_device_type": 2 00:16:02.526 } 00:16:02.526 ], 00:16:02.526 "driver_specific": {} 00:16:02.526 } 00:16:02.526 ] 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.526 
22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.526 "name": "Existed_Raid", 00:16:02.526 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:02.526 "strip_size_kb": 64, 00:16:02.526 "state": "configuring", 00:16:02.526 "raid_level": "concat", 00:16:02.526 "superblock": true, 00:16:02.526 "num_base_bdevs": 3, 00:16:02.526 "num_base_bdevs_discovered": 2, 00:16:02.526 "num_base_bdevs_operational": 3, 00:16:02.526 "base_bdevs_list": [ 00:16:02.526 { 00:16:02.526 "name": "BaseBdev1", 00:16:02.526 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:02.526 "is_configured": true, 00:16:02.526 "data_offset": 2048, 00:16:02.526 "data_size": 63488 00:16:02.526 }, 00:16:02.526 { 00:16:02.526 "name": null, 00:16:02.526 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:02.526 "is_configured": false, 00:16:02.526 "data_offset": 0, 00:16:02.526 "data_size": 63488 00:16:02.526 }, 00:16:02.526 { 00:16:02.526 "name": "BaseBdev3", 00:16:02.526 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:02.526 "is_configured": true, 00:16:02.526 "data_offset": 2048, 00:16:02.526 "data_size": 63488 00:16:02.526 } 00:16:02.526 ] 00:16:02.526 }' 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.526 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.784 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.784 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.784 22:57:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.784 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.044 [2024-12-09 22:57:18.676633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.044 "name": "Existed_Raid", 00:16:03.044 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:03.044 "strip_size_kb": 64, 00:16:03.044 "state": "configuring", 00:16:03.044 "raid_level": "concat", 00:16:03.044 "superblock": true, 00:16:03.044 "num_base_bdevs": 3, 00:16:03.044 "num_base_bdevs_discovered": 1, 00:16:03.044 "num_base_bdevs_operational": 3, 00:16:03.044 "base_bdevs_list": [ 00:16:03.044 { 00:16:03.044 "name": "BaseBdev1", 00:16:03.044 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:03.044 "is_configured": true, 00:16:03.044 "data_offset": 2048, 00:16:03.044 "data_size": 63488 00:16:03.044 }, 00:16:03.044 { 00:16:03.044 "name": null, 00:16:03.044 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:03.044 "is_configured": false, 00:16:03.044 "data_offset": 0, 00:16:03.044 "data_size": 63488 00:16:03.044 }, 00:16:03.044 { 00:16:03.044 "name": null, 00:16:03.044 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:03.044 "is_configured": false, 00:16:03.044 "data_offset": 0, 00:16:03.044 "data_size": 63488 00:16:03.044 } 00:16:03.044 ] 00:16:03.044 }' 00:16:03.044 22:57:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.044 22:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.302 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.302 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.302 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.302 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.585 [2024-12-09 22:57:19.199855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.585 "name": "Existed_Raid", 00:16:03.585 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:03.585 "strip_size_kb": 64, 00:16:03.585 "state": "configuring", 00:16:03.585 "raid_level": "concat", 00:16:03.585 "superblock": true, 00:16:03.585 "num_base_bdevs": 3, 00:16:03.585 "num_base_bdevs_discovered": 2, 00:16:03.585 "num_base_bdevs_operational": 3, 00:16:03.585 "base_bdevs_list": [ 00:16:03.585 { 00:16:03.585 "name": "BaseBdev1", 00:16:03.585 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:03.585 "is_configured": true, 00:16:03.585 "data_offset": 2048, 00:16:03.585 "data_size": 63488 00:16:03.585 }, 
00:16:03.585 { 00:16:03.585 "name": null, 00:16:03.585 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:03.585 "is_configured": false, 00:16:03.585 "data_offset": 0, 00:16:03.585 "data_size": 63488 00:16:03.585 }, 00:16:03.585 { 00:16:03.585 "name": "BaseBdev3", 00:16:03.585 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:03.585 "is_configured": true, 00:16:03.585 "data_offset": 2048, 00:16:03.585 "data_size": 63488 00:16:03.585 } 00:16:03.585 ] 00:16:03.585 }' 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.585 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.864 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.864 [2024-12-09 22:57:19.711037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.123 22:57:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.123 "name": "Existed_Raid", 00:16:04.123 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:04.123 "strip_size_kb": 64, 
00:16:04.123 "state": "configuring", 00:16:04.123 "raid_level": "concat", 00:16:04.123 "superblock": true, 00:16:04.123 "num_base_bdevs": 3, 00:16:04.123 "num_base_bdevs_discovered": 1, 00:16:04.123 "num_base_bdevs_operational": 3, 00:16:04.123 "base_bdevs_list": [ 00:16:04.123 { 00:16:04.123 "name": null, 00:16:04.123 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:04.123 "is_configured": false, 00:16:04.123 "data_offset": 0, 00:16:04.123 "data_size": 63488 00:16:04.123 }, 00:16:04.123 { 00:16:04.123 "name": null, 00:16:04.123 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:04.123 "is_configured": false, 00:16:04.123 "data_offset": 0, 00:16:04.123 "data_size": 63488 00:16:04.123 }, 00:16:04.123 { 00:16:04.123 "name": "BaseBdev3", 00:16:04.123 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:04.123 "is_configured": true, 00:16:04.123 "data_offset": 2048, 00:16:04.123 "data_size": 63488 00:16:04.123 } 00:16:04.123 ] 00:16:04.123 }' 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.123 22:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.689 [2024-12-09 22:57:20.334142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.689 "name": "Existed_Raid", 00:16:04.689 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:04.689 "strip_size_kb": 64, 00:16:04.689 "state": "configuring", 00:16:04.689 "raid_level": "concat", 00:16:04.689 "superblock": true, 00:16:04.689 "num_base_bdevs": 3, 00:16:04.689 "num_base_bdevs_discovered": 2, 00:16:04.689 "num_base_bdevs_operational": 3, 00:16:04.689 "base_bdevs_list": [ 00:16:04.689 { 00:16:04.689 "name": null, 00:16:04.689 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:04.689 "is_configured": false, 00:16:04.689 "data_offset": 0, 00:16:04.689 "data_size": 63488 00:16:04.689 }, 00:16:04.689 { 00:16:04.689 "name": "BaseBdev2", 00:16:04.689 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:04.689 "is_configured": true, 00:16:04.689 "data_offset": 2048, 00:16:04.689 "data_size": 63488 00:16:04.689 }, 00:16:04.689 { 00:16:04.689 "name": "BaseBdev3", 00:16:04.689 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:04.689 "is_configured": true, 00:16:04.689 "data_offset": 2048, 00:16:04.689 "data_size": 63488 00:16:04.689 } 00:16:04.689 ] 00:16:04.689 }' 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.689 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.949 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.949 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.949 22:57:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.949 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:04.949 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3047dee8-34c8-4f5c-a374-ea0e51266477 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.237 [2024-12-09 22:57:20.928980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:05.237 [2024-12-09 22:57:20.929300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:05.237 [2024-12-09 22:57:20.929321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:05.237 [2024-12-09 22:57:20.929679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:05.237 [2024-12-09 22:57:20.929859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000008200 00:16:05.237 [2024-12-09 22:57:20.929878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:05.237 NewBaseBdev 00:16:05.237 [2024-12-09 22:57:20.930042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.237 [ 00:16:05.237 { 00:16:05.237 "name": "NewBaseBdev", 
00:16:05.237 "aliases": [ 00:16:05.237 "3047dee8-34c8-4f5c-a374-ea0e51266477" 00:16:05.237 ], 00:16:05.237 "product_name": "Malloc disk", 00:16:05.237 "block_size": 512, 00:16:05.237 "num_blocks": 65536, 00:16:05.237 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:05.237 "assigned_rate_limits": { 00:16:05.237 "rw_ios_per_sec": 0, 00:16:05.237 "rw_mbytes_per_sec": 0, 00:16:05.237 "r_mbytes_per_sec": 0, 00:16:05.237 "w_mbytes_per_sec": 0 00:16:05.237 }, 00:16:05.237 "claimed": true, 00:16:05.237 "claim_type": "exclusive_write", 00:16:05.237 "zoned": false, 00:16:05.237 "supported_io_types": { 00:16:05.237 "read": true, 00:16:05.237 "write": true, 00:16:05.237 "unmap": true, 00:16:05.237 "flush": true, 00:16:05.237 "reset": true, 00:16:05.237 "nvme_admin": false, 00:16:05.237 "nvme_io": false, 00:16:05.237 "nvme_io_md": false, 00:16:05.237 "write_zeroes": true, 00:16:05.237 "zcopy": true, 00:16:05.237 "get_zone_info": false, 00:16:05.237 "zone_management": false, 00:16:05.237 "zone_append": false, 00:16:05.237 "compare": false, 00:16:05.237 "compare_and_write": false, 00:16:05.237 "abort": true, 00:16:05.237 "seek_hole": false, 00:16:05.237 "seek_data": false, 00:16:05.237 "copy": true, 00:16:05.237 "nvme_iov_md": false 00:16:05.237 }, 00:16:05.237 "memory_domains": [ 00:16:05.237 { 00:16:05.237 "dma_device_id": "system", 00:16:05.237 "dma_device_type": 1 00:16:05.237 }, 00:16:05.237 { 00:16:05.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.237 "dma_device_type": 2 00:16:05.237 } 00:16:05.237 ], 00:16:05.237 "driver_specific": {} 00:16:05.237 } 00:16:05.237 ] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:05.237 22:57:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.237 22:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.237 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.237 "name": "Existed_Raid", 00:16:05.237 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:05.237 "strip_size_kb": 64, 00:16:05.237 "state": "online", 00:16:05.237 "raid_level": "concat", 00:16:05.237 "superblock": true, 00:16:05.237 "num_base_bdevs": 3, 00:16:05.237 
"num_base_bdevs_discovered": 3, 00:16:05.237 "num_base_bdevs_operational": 3, 00:16:05.237 "base_bdevs_list": [ 00:16:05.237 { 00:16:05.237 "name": "NewBaseBdev", 00:16:05.237 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:05.237 "is_configured": true, 00:16:05.237 "data_offset": 2048, 00:16:05.237 "data_size": 63488 00:16:05.237 }, 00:16:05.237 { 00:16:05.237 "name": "BaseBdev2", 00:16:05.237 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:05.237 "is_configured": true, 00:16:05.237 "data_offset": 2048, 00:16:05.237 "data_size": 63488 00:16:05.237 }, 00:16:05.237 { 00:16:05.237 "name": "BaseBdev3", 00:16:05.237 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:05.237 "is_configured": true, 00:16:05.237 "data_offset": 2048, 00:16:05.237 "data_size": 63488 00:16:05.237 } 00:16:05.237 ] 00:16:05.237 }' 00:16:05.237 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.237 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:05.805 22:57:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 [2024-12-09 22:57:21.456847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:05.805 "name": "Existed_Raid", 00:16:05.805 "aliases": [ 00:16:05.805 "872dbcea-60c6-44a9-8ab7-7bac0756615d" 00:16:05.805 ], 00:16:05.805 "product_name": "Raid Volume", 00:16:05.805 "block_size": 512, 00:16:05.805 "num_blocks": 190464, 00:16:05.805 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:05.805 "assigned_rate_limits": { 00:16:05.805 "rw_ios_per_sec": 0, 00:16:05.805 "rw_mbytes_per_sec": 0, 00:16:05.805 "r_mbytes_per_sec": 0, 00:16:05.805 "w_mbytes_per_sec": 0 00:16:05.805 }, 00:16:05.805 "claimed": false, 00:16:05.805 "zoned": false, 00:16:05.805 "supported_io_types": { 00:16:05.805 "read": true, 00:16:05.805 "write": true, 00:16:05.805 "unmap": true, 00:16:05.805 "flush": true, 00:16:05.805 "reset": true, 00:16:05.805 "nvme_admin": false, 00:16:05.805 "nvme_io": false, 00:16:05.805 "nvme_io_md": false, 00:16:05.805 "write_zeroes": true, 00:16:05.805 "zcopy": false, 00:16:05.805 "get_zone_info": false, 00:16:05.805 "zone_management": false, 00:16:05.805 "zone_append": false, 00:16:05.805 "compare": false, 00:16:05.805 "compare_and_write": false, 00:16:05.805 "abort": false, 00:16:05.805 "seek_hole": false, 00:16:05.805 "seek_data": false, 00:16:05.805 "copy": false, 00:16:05.805 "nvme_iov_md": false 00:16:05.805 }, 00:16:05.805 "memory_domains": [ 00:16:05.805 { 00:16:05.805 "dma_device_id": "system", 00:16:05.805 "dma_device_type": 1 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:05.805 "dma_device_type": 2 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "dma_device_id": "system", 00:16:05.805 "dma_device_type": 1 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.805 "dma_device_type": 2 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "dma_device_id": "system", 00:16:05.805 "dma_device_type": 1 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.805 "dma_device_type": 2 00:16:05.805 } 00:16:05.805 ], 00:16:05.805 "driver_specific": { 00:16:05.805 "raid": { 00:16:05.805 "uuid": "872dbcea-60c6-44a9-8ab7-7bac0756615d", 00:16:05.805 "strip_size_kb": 64, 00:16:05.805 "state": "online", 00:16:05.805 "raid_level": "concat", 00:16:05.805 "superblock": true, 00:16:05.805 "num_base_bdevs": 3, 00:16:05.805 "num_base_bdevs_discovered": 3, 00:16:05.805 "num_base_bdevs_operational": 3, 00:16:05.805 "base_bdevs_list": [ 00:16:05.805 { 00:16:05.805 "name": "NewBaseBdev", 00:16:05.805 "uuid": "3047dee8-34c8-4f5c-a374-ea0e51266477", 00:16:05.805 "is_configured": true, 00:16:05.805 "data_offset": 2048, 00:16:05.805 "data_size": 63488 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "name": "BaseBdev2", 00:16:05.805 "uuid": "5373e967-d54d-4a3b-b2b9-21523a6189a0", 00:16:05.805 "is_configured": true, 00:16:05.805 "data_offset": 2048, 00:16:05.805 "data_size": 63488 00:16:05.805 }, 00:16:05.805 { 00:16:05.805 "name": "BaseBdev3", 00:16:05.805 "uuid": "59fe0c8e-e796-48f6-9063-4e1da646f345", 00:16:05.805 "is_configured": true, 00:16:05.805 "data_offset": 2048, 00:16:05.805 "data_size": 63488 00:16:05.805 } 00:16:05.805 ] 00:16:05.805 } 00:16:05.805 } 00:16:05.805 }' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:05.805 BaseBdev2 
00:16:05.805 BaseBdev3' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.065 [2024-12-09 22:57:21.740000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.065 [2024-12-09 22:57:21.740053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.065 [2024-12-09 22:57:21.740190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.065 [2024-12-09 22:57:21.740264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.065 [2024-12-09 22:57:21.740280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66735 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66735 ']' 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66735 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66735 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66735' 00:16:06.065 killing process with pid 66735 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66735 00:16:06.065 [2024-12-09 22:57:21.790094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.065 22:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66735 00:16:06.324 [2024-12-09 22:57:22.175678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.251 22:57:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:08.251 00:16:08.251 
real 0m11.588s 00:16:08.251 user 0m18.002s 00:16:08.251 sys 0m2.109s 00:16:08.251 22:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.251 22:57:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.251 ************************************ 00:16:08.251 END TEST raid_state_function_test_sb 00:16:08.251 ************************************ 00:16:08.251 22:57:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:08.251 22:57:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:08.251 22:57:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.251 22:57:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.251 ************************************ 00:16:08.251 START TEST raid_superblock_test 00:16:08.251 ************************************ 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # 
local raid_bdev_name=raid_bdev1 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67366 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67366 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67366 ']' 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.251 22:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.251 [2024-12-09 22:57:23.783205] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:16:08.251 [2024-12-09 22:57:23.783479] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67366 ] 00:16:08.251 [2024-12-09 22:57:23.964621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.524 [2024-12-09 22:57:24.130799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.782 [2024-12-09 22:57:24.408674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.782 [2024-12-09 22:57:24.408899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.041 22:57:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 malloc1 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 [2024-12-09 22:57:24.729880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:09.041 [2024-12-09 22:57:24.730061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.041 [2024-12-09 22:57:24.730109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.041 [2024-12-09 22:57:24.730148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.041 [2024-12-09 22:57:24.732921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.041 [2024-12-09 22:57:24.733003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:09.041 pt1 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:09.041 22:57:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 malloc2 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 [2024-12-09 22:57:24.794267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.041 [2024-12-09 22:57:24.794342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.041 [2024-12-09 22:57:24.794368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.041 
[2024-12-09 22:57:24.794379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.041 [2024-12-09 22:57:24.797005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.041 [2024-12-09 22:57:24.797044] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.041 pt2 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 malloc3 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:09.041 
22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.041 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 [2024-12-09 22:57:24.875541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:09.041 [2024-12-09 22:57:24.875688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.041 [2024-12-09 22:57:24.875731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:09.041 [2024-12-09 22:57:24.875766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.041 [2024-12-09 22:57:24.878397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.041 [2024-12-09 22:57:24.878535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:09.041 pt3 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.042 [2024-12-09 22:57:24.887599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:09.042 [2024-12-09 22:57:24.890048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.042 [2024-12-09 22:57:24.890131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:09.042 [2024-12-09 
22:57:24.890322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:09.042 [2024-12-09 22:57:24.890338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.042 [2024-12-09 22:57:24.890663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:09.042 [2024-12-09 22:57:24.890850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:09.042 [2024-12-09 22:57:24.890860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:09.042 [2024-12-09 22:57:24.891044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.042 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.299 22:57:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.299 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.299 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.299 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.299 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.299 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.300 "name": "raid_bdev1", 00:16:09.300 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:09.300 "strip_size_kb": 64, 00:16:09.300 "state": "online", 00:16:09.300 "raid_level": "concat", 00:16:09.300 "superblock": true, 00:16:09.300 "num_base_bdevs": 3, 00:16:09.300 "num_base_bdevs_discovered": 3, 00:16:09.300 "num_base_bdevs_operational": 3, 00:16:09.300 "base_bdevs_list": [ 00:16:09.300 { 00:16:09.300 "name": "pt1", 00:16:09.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.300 "is_configured": true, 00:16:09.300 "data_offset": 2048, 00:16:09.300 "data_size": 63488 00:16:09.300 }, 00:16:09.300 { 00:16:09.300 "name": "pt2", 00:16:09.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.300 "is_configured": true, 00:16:09.300 "data_offset": 2048, 00:16:09.300 "data_size": 63488 00:16:09.300 }, 00:16:09.300 { 00:16:09.300 "name": "pt3", 00:16:09.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.300 "is_configured": true, 00:16:09.300 "data_offset": 2048, 00:16:09.300 "data_size": 63488 00:16:09.300 } 00:16:09.300 ] 00:16:09.300 }' 00:16:09.300 22:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.300 22:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
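The xtrace above shows the test's per-base-bdev setup loop (`bdev_raid.sh@416`–`@426`): for each of the three slots it records a malloc bdev name, a passthru name, and a fixed UUID into parallel arrays, then issues `bdev_malloc_create` and `bdev_passthru_create` RPCs. A minimal sketch of just the array bookkeeping follows — the RPC calls are stubbed out as comments, and this is an illustration of the pattern, not the real script:

```shell
#!/usr/bin/env bash
# Sketch of the bookkeeping loop from bdev_raid.sh@416-423; RPC calls stubbed.
num_base_bdevs=3
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # Real test: rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    #            rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

echo "${base_bdevs_pt[*]}"
```

After the loop, `bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s` assembles the passthru bdevs into the concat volume whose state is dumped above.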
verify_raid_bdev_properties raid_bdev1 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.558 [2024-12-09 22:57:25.343236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:09.558 "name": "raid_bdev1", 00:16:09.558 "aliases": [ 00:16:09.558 "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97" 00:16:09.558 ], 00:16:09.558 "product_name": "Raid Volume", 00:16:09.558 "block_size": 512, 00:16:09.558 "num_blocks": 190464, 00:16:09.558 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:09.558 "assigned_rate_limits": { 00:16:09.558 "rw_ios_per_sec": 0, 00:16:09.558 "rw_mbytes_per_sec": 0, 00:16:09.558 "r_mbytes_per_sec": 0, 00:16:09.558 "w_mbytes_per_sec": 0 00:16:09.558 }, 00:16:09.558 "claimed": false, 00:16:09.558 "zoned": false, 00:16:09.558 "supported_io_types": { 00:16:09.558 "read": true, 00:16:09.558 "write": true, 00:16:09.558 "unmap": true, 
00:16:09.558 "flush": true, 00:16:09.558 "reset": true, 00:16:09.558 "nvme_admin": false, 00:16:09.558 "nvme_io": false, 00:16:09.558 "nvme_io_md": false, 00:16:09.558 "write_zeroes": true, 00:16:09.558 "zcopy": false, 00:16:09.558 "get_zone_info": false, 00:16:09.558 "zone_management": false, 00:16:09.558 "zone_append": false, 00:16:09.558 "compare": false, 00:16:09.558 "compare_and_write": false, 00:16:09.558 "abort": false, 00:16:09.558 "seek_hole": false, 00:16:09.558 "seek_data": false, 00:16:09.558 "copy": false, 00:16:09.558 "nvme_iov_md": false 00:16:09.558 }, 00:16:09.558 "memory_domains": [ 00:16:09.558 { 00:16:09.558 "dma_device_id": "system", 00:16:09.558 "dma_device_type": 1 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.558 "dma_device_type": 2 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "dma_device_id": "system", 00:16:09.558 "dma_device_type": 1 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.558 "dma_device_type": 2 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "dma_device_id": "system", 00:16:09.558 "dma_device_type": 1 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.558 "dma_device_type": 2 00:16:09.558 } 00:16:09.558 ], 00:16:09.558 "driver_specific": { 00:16:09.558 "raid": { 00:16:09.558 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:09.558 "strip_size_kb": 64, 00:16:09.558 "state": "online", 00:16:09.558 "raid_level": "concat", 00:16:09.558 "superblock": true, 00:16:09.558 "num_base_bdevs": 3, 00:16:09.558 "num_base_bdevs_discovered": 3, 00:16:09.558 "num_base_bdevs_operational": 3, 00:16:09.558 "base_bdevs_list": [ 00:16:09.558 { 00:16:09.558 "name": "pt1", 00:16:09.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.558 "is_configured": true, 00:16:09.558 "data_offset": 2048, 00:16:09.558 "data_size": 63488 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "name": "pt2", 00:16:09.558 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:09.558 "is_configured": true, 00:16:09.558 "data_offset": 2048, 00:16:09.558 "data_size": 63488 00:16:09.558 }, 00:16:09.558 { 00:16:09.558 "name": "pt3", 00:16:09.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.558 "is_configured": true, 00:16:09.558 "data_offset": 2048, 00:16:09.558 "data_size": 63488 00:16:09.558 } 00:16:09.558 ] 00:16:09.558 } 00:16:09.558 } 00:16:09.558 }' 00:16:09.558 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:09.817 pt2 00:16:09.817 pt3' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:09.817 [2024-12-09 22:57:25.618704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0eeec4eb-4c1d-4a1b-a16d-a59888d09e97 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0eeec4eb-4c1d-4a1b-a16d-a59888d09e97 ']' 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.817 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.817 [2024-12-09 22:57:25.670258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.817 [2024-12-09 22:57:25.670315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.817 [2024-12-09 22:57:25.670454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.817 [2024-12-09 22:57:25.670578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.818 [2024-12-09 22:57:25.670599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.076 22:57:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.076 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 22:57:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 [2024-12-09 22:57:25.826091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:10.077 [2024-12-09 22:57:25.828824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:10.077 [2024-12-09 22:57:25.828894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:10.077 [2024-12-09 22:57:25.828968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:10.077 [2024-12-09 22:57:25.829043] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:10.077 [2024-12-09 22:57:25.829067] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:10.077 [2024-12-09 22:57:25.829089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.077 [2024-12-09 22:57:25.829100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:10.077 request: 00:16:10.077 { 00:16:10.077 "name": "raid_bdev1", 00:16:10.077 "raid_level": "concat", 00:16:10.077 "base_bdevs": [ 00:16:10.077 "malloc1", 00:16:10.077 "malloc2", 00:16:10.077 "malloc3" 00:16:10.077 ], 00:16:10.077 "strip_size_kb": 64, 00:16:10.077 "superblock": false, 00:16:10.077 "method": "bdev_raid_create", 00:16:10.077 "req_id": 1 00:16:10.077 } 00:16:10.077 Got JSON-RPC error response 00:16:10.077 response: 00:16:10.077 { 00:16:10.077 "code": -17, 00:16:10.077 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:10.077 } 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 
00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 [2024-12-09 22:57:25.893989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:10.077 [2024-12-09 22:57:25.894182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.077 [2024-12-09 22:57:25.894230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:10.077 [2024-12-09 22:57:25.894270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.077 [2024-12-09 22:57:25.897266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:16:10.077 [2024-12-09 22:57:25.897356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:10.077 [2024-12-09 22:57:25.897520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:10.077 [2024-12-09 22:57:25.897620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:10.077 pt1 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.077 22:57:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.077 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.336 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.336 "name": "raid_bdev1", 00:16:10.336 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:10.336 "strip_size_kb": 64, 00:16:10.336 "state": "configuring", 00:16:10.336 "raid_level": "concat", 00:16:10.336 "superblock": true, 00:16:10.336 "num_base_bdevs": 3, 00:16:10.336 "num_base_bdevs_discovered": 1, 00:16:10.336 "num_base_bdevs_operational": 3, 00:16:10.336 "base_bdevs_list": [ 00:16:10.336 { 00:16:10.336 "name": "pt1", 00:16:10.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.336 "is_configured": true, 00:16:10.336 "data_offset": 2048, 00:16:10.336 "data_size": 63488 00:16:10.336 }, 00:16:10.336 { 00:16:10.336 "name": null, 00:16:10.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.336 "is_configured": false, 00:16:10.336 "data_offset": 2048, 00:16:10.336 "data_size": 63488 00:16:10.336 }, 00:16:10.336 { 00:16:10.336 "name": null, 00:16:10.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.336 "is_configured": false, 00:16:10.336 "data_offset": 2048, 00:16:10.336 "data_size": 63488 00:16:10.336 } 00:16:10.336 ] 00:16:10.336 }' 00:16:10.336 22:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.336 22:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.594 [2024-12-09 22:57:26.361246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:10.594 [2024-12-09 22:57:26.361365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.594 [2024-12-09 22:57:26.361401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:10.594 [2024-12-09 22:57:26.361414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.594 [2024-12-09 22:57:26.362041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.594 [2024-12-09 22:57:26.362069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:10.594 [2024-12-09 22:57:26.362192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:10.594 [2024-12-09 22:57:26.362232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.594 pt2 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.594 [2024-12-09 22:57:26.373224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.594 "name": "raid_bdev1", 00:16:10.594 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:10.594 "strip_size_kb": 64, 00:16:10.594 "state": "configuring", 00:16:10.594 "raid_level": "concat", 00:16:10.594 "superblock": true, 00:16:10.594 "num_base_bdevs": 3, 00:16:10.594 "num_base_bdevs_discovered": 1, 00:16:10.594 "num_base_bdevs_operational": 3, 00:16:10.594 "base_bdevs_list": [ 00:16:10.594 { 00:16:10.594 "name": "pt1", 00:16:10.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.594 "is_configured": true, 00:16:10.594 "data_offset": 2048, 
00:16:10.594 "data_size": 63488 00:16:10.594 }, 00:16:10.594 { 00:16:10.594 "name": null, 00:16:10.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.594 "is_configured": false, 00:16:10.594 "data_offset": 0, 00:16:10.594 "data_size": 63488 00:16:10.594 }, 00:16:10.594 { 00:16:10.594 "name": null, 00:16:10.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.594 "is_configured": false, 00:16:10.594 "data_offset": 2048, 00:16:10.594 "data_size": 63488 00:16:10.594 } 00:16:10.594 ] 00:16:10.594 }' 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.594 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.161 [2024-12-09 22:57:26.800644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:11.161 [2024-12-09 22:57:26.800842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.161 [2024-12-09 22:57:26.800887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:11.161 [2024-12-09 22:57:26.800929] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.161 [2024-12-09 22:57:26.801609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.161 [2024-12-09 22:57:26.801686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:16:11.161 [2024-12-09 22:57:26.801832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:11.161 [2024-12-09 22:57:26.801899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:11.161 pt2 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.161 [2024-12-09 22:57:26.812582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:11.161 [2024-12-09 22:57:26.812698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.161 [2024-12-09 22:57:26.812721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:11.161 [2024-12-09 22:57:26.812734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.161 [2024-12-09 22:57:26.813302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.161 [2024-12-09 22:57:26.813346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:11.161 [2024-12-09 22:57:26.813443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:11.161 [2024-12-09 22:57:26.813491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:11.161 [2024-12-09 22:57:26.813638] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:11.161 [2024-12-09 22:57:26.813658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.161 [2024-12-09 22:57:26.813958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:11.161 [2024-12-09 22:57:26.814145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:11.161 [2024-12-09 22:57:26.814155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:11.161 [2024-12-09 22:57:26.814329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.161 pt3 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.161 22:57:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.161 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.162 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.162 22:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.162 "name": "raid_bdev1", 00:16:11.162 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:11.162 "strip_size_kb": 64, 00:16:11.162 "state": "online", 00:16:11.162 "raid_level": "concat", 00:16:11.162 "superblock": true, 00:16:11.162 "num_base_bdevs": 3, 00:16:11.162 "num_base_bdevs_discovered": 3, 00:16:11.162 "num_base_bdevs_operational": 3, 00:16:11.162 "base_bdevs_list": [ 00:16:11.162 { 00:16:11.162 "name": "pt1", 00:16:11.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.162 "is_configured": true, 00:16:11.162 "data_offset": 2048, 00:16:11.162 "data_size": 63488 00:16:11.162 }, 00:16:11.162 { 00:16:11.162 "name": "pt2", 00:16:11.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.162 "is_configured": true, 00:16:11.162 "data_offset": 2048, 00:16:11.162 "data_size": 63488 00:16:11.162 }, 00:16:11.162 { 00:16:11.162 "name": "pt3", 00:16:11.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.162 "is_configured": true, 00:16:11.162 "data_offset": 2048, 00:16:11.162 "data_size": 63488 00:16:11.162 } 00:16:11.162 ] 00:16:11.162 }' 00:16:11.162 22:57:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.162 22:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.420 [2024-12-09 22:57:27.228377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.420 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:11.420 "name": "raid_bdev1", 00:16:11.420 "aliases": [ 00:16:11.420 "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97" 00:16:11.420 ], 00:16:11.420 "product_name": "Raid Volume", 00:16:11.420 "block_size": 512, 00:16:11.420 "num_blocks": 190464, 00:16:11.420 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:11.420 "assigned_rate_limits": { 00:16:11.420 "rw_ios_per_sec": 0, 00:16:11.420 "rw_mbytes_per_sec": 0, 00:16:11.420 "r_mbytes_per_sec": 0, 00:16:11.420 
"w_mbytes_per_sec": 0 00:16:11.420 }, 00:16:11.420 "claimed": false, 00:16:11.420 "zoned": false, 00:16:11.420 "supported_io_types": { 00:16:11.420 "read": true, 00:16:11.420 "write": true, 00:16:11.420 "unmap": true, 00:16:11.420 "flush": true, 00:16:11.420 "reset": true, 00:16:11.420 "nvme_admin": false, 00:16:11.420 "nvme_io": false, 00:16:11.420 "nvme_io_md": false, 00:16:11.420 "write_zeroes": true, 00:16:11.420 "zcopy": false, 00:16:11.420 "get_zone_info": false, 00:16:11.420 "zone_management": false, 00:16:11.420 "zone_append": false, 00:16:11.420 "compare": false, 00:16:11.420 "compare_and_write": false, 00:16:11.420 "abort": false, 00:16:11.420 "seek_hole": false, 00:16:11.420 "seek_data": false, 00:16:11.420 "copy": false, 00:16:11.420 "nvme_iov_md": false 00:16:11.420 }, 00:16:11.420 "memory_domains": [ 00:16:11.420 { 00:16:11.420 "dma_device_id": "system", 00:16:11.420 "dma_device_type": 1 00:16:11.420 }, 00:16:11.420 { 00:16:11.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.420 "dma_device_type": 2 00:16:11.420 }, 00:16:11.420 { 00:16:11.420 "dma_device_id": "system", 00:16:11.420 "dma_device_type": 1 00:16:11.420 }, 00:16:11.420 { 00:16:11.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.420 "dma_device_type": 2 00:16:11.420 }, 00:16:11.420 { 00:16:11.420 "dma_device_id": "system", 00:16:11.420 "dma_device_type": 1 00:16:11.420 }, 00:16:11.420 { 00:16:11.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.420 "dma_device_type": 2 00:16:11.421 } 00:16:11.421 ], 00:16:11.421 "driver_specific": { 00:16:11.421 "raid": { 00:16:11.421 "uuid": "0eeec4eb-4c1d-4a1b-a16d-a59888d09e97", 00:16:11.421 "strip_size_kb": 64, 00:16:11.421 "state": "online", 00:16:11.421 "raid_level": "concat", 00:16:11.421 "superblock": true, 00:16:11.421 "num_base_bdevs": 3, 00:16:11.421 "num_base_bdevs_discovered": 3, 00:16:11.421 "num_base_bdevs_operational": 3, 00:16:11.421 "base_bdevs_list": [ 00:16:11.421 { 00:16:11.421 "name": "pt1", 00:16:11.421 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:11.421 "is_configured": true, 00:16:11.421 "data_offset": 2048, 00:16:11.421 "data_size": 63488 00:16:11.421 }, 00:16:11.421 { 00:16:11.421 "name": "pt2", 00:16:11.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.421 "is_configured": true, 00:16:11.421 "data_offset": 2048, 00:16:11.421 "data_size": 63488 00:16:11.421 }, 00:16:11.421 { 00:16:11.421 "name": "pt3", 00:16:11.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.421 "is_configured": true, 00:16:11.421 "data_offset": 2048, 00:16:11.421 "data_size": 63488 00:16:11.421 } 00:16:11.421 ] 00:16:11.421 } 00:16:11.421 } 00:16:11.421 }' 00:16:11.421 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:11.680 pt2 00:16:11.680 pt3' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 [2024-12-09 22:57:27.496000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0eeec4eb-4c1d-4a1b-a16d-a59888d09e97 '!=' 0eeec4eb-4c1d-4a1b-a16d-a59888d09e97 ']' 00:16:11.680 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67366 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67366 ']' 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67366 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67366 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67366' 00:16:11.938 killing process with pid 67366 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67366 00:16:11.938 22:57:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67366 00:16:11.938 [2024-12-09 22:57:27.579208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.938 [2024-12-09 22:57:27.579381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.938 [2024-12-09 22:57:27.579596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.938 [2024-12-09 22:57:27.579652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:12.196 [2024-12-09 22:57:27.977586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.099 22:57:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:14.099 00:16:14.099 real 0m5.767s 00:16:14.099 user 0m7.908s 00:16:14.099 sys 0m1.076s 00:16:14.099 22:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.099 22:57:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.099 ************************************ 00:16:14.099 END TEST raid_superblock_test 00:16:14.099 ************************************ 00:16:14.099 22:57:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:16:14.099 22:57:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:14.099 22:57:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.099 22:57:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.099 ************************************ 00:16:14.099 START TEST raid_read_error_test 00:16:14.099 
************************************ 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local strip_size 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ilKmSWcpTb 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67629 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67629 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67629 ']' 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:14.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.099 22:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.099 [2024-12-09 22:57:29.617922] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:16:14.099 [2024-12-09 22:57:29.618146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67629 ] 00:16:14.099 [2024-12-09 22:57:29.796115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.357 [2024-12-09 22:57:29.957178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.620 [2024-12-09 22:57:30.235920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.620 [2024-12-09 22:57:30.236116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 BaseBdev1_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 true 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 [2024-12-09 22:57:30.577858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:14.881 [2024-12-09 22:57:30.578066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.881 [2024-12-09 22:57:30.578109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:14.881 [2024-12-09 22:57:30.578127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.881 [2024-12-09 22:57:30.581382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.881 [2024-12-09 22:57:30.581520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.881 BaseBdev1 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:14.881 BaseBdev2_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 true 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 [2024-12-09 22:57:30.658771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:14.881 [2024-12-09 22:57:30.658874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.881 [2024-12-09 22:57:30.658904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:14.881 [2024-12-09 22:57:30.658918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.881 [2024-12-09 22:57:30.662020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.881 [2024-12-09 22:57:30.662081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:14.881 BaseBdev2 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 BaseBdev3_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.881 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.139 true 00:16:15.139 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.139 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:15.139 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.139 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.139 [2024-12-09 22:57:30.752148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:15.139 [2024-12-09 22:57:30.752348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.140 [2024-12-09 22:57:30.752384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:15.140 [2024-12-09 22:57:30.752411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.140 [2024-12-09 22:57:30.755540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.140 [2024-12-09 22:57:30.755598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:15.140 BaseBdev3 00:16:15.140 22:57:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.140 [2024-12-09 22:57:30.764527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.140 [2024-12-09 22:57:30.767144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.140 [2024-12-09 22:57:30.767352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.140 [2024-12-09 22:57:30.767672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:15.140 [2024-12-09 22:57:30.767691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.140 [2024-12-09 22:57:30.768071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:15.140 [2024-12-09 22:57:30.768295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:15.140 [2024-12-09 22:57:30.768312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:15.140 [2024-12-09 22:57:30.768649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.140 22:57:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.140 "name": "raid_bdev1", 00:16:15.140 "uuid": "777c0a7f-838b-4fd6-acef-b3ea3237620a", 00:16:15.140 "strip_size_kb": 64, 00:16:15.140 "state": "online", 00:16:15.140 "raid_level": "concat", 00:16:15.140 "superblock": true, 00:16:15.140 "num_base_bdevs": 3, 00:16:15.140 "num_base_bdevs_discovered": 3, 00:16:15.140 "num_base_bdevs_operational": 3, 00:16:15.140 "base_bdevs_list": [ 00:16:15.140 { 00:16:15.140 "name": "BaseBdev1", 00:16:15.140 "uuid": "6a4297b9-949d-5827-aa97-392e9a609002", 00:16:15.140 
"is_configured": true, 00:16:15.140 "data_offset": 2048, 00:16:15.140 "data_size": 63488 00:16:15.140 }, 00:16:15.140 { 00:16:15.140 "name": "BaseBdev2", 00:16:15.140 "uuid": "d51732e2-30fd-522b-8a52-acfe0e54bf13", 00:16:15.140 "is_configured": true, 00:16:15.140 "data_offset": 2048, 00:16:15.140 "data_size": 63488 00:16:15.140 }, 00:16:15.140 { 00:16:15.140 "name": "BaseBdev3", 00:16:15.140 "uuid": "2e8d5dec-f290-5547-b8df-3fceaa0751f4", 00:16:15.140 "is_configured": true, 00:16:15.140 "data_offset": 2048, 00:16:15.140 "data_size": 63488 00:16:15.140 } 00:16:15.140 ] 00:16:15.140 }' 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.140 22:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.398 22:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:15.398 22:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:15.656 [2024-12-09 22:57:31.325700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:16.595 22:57:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.595 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.596 "name": "raid_bdev1", 00:16:16.596 "uuid": "777c0a7f-838b-4fd6-acef-b3ea3237620a", 00:16:16.596 "strip_size_kb": 64, 00:16:16.596 "state": "online", 00:16:16.596 "raid_level": "concat", 00:16:16.596 "superblock": true, 00:16:16.596 "num_base_bdevs": 3, 
00:16:16.596 "num_base_bdevs_discovered": 3, 00:16:16.596 "num_base_bdevs_operational": 3, 00:16:16.596 "base_bdevs_list": [ 00:16:16.596 { 00:16:16.596 "name": "BaseBdev1", 00:16:16.596 "uuid": "6a4297b9-949d-5827-aa97-392e9a609002", 00:16:16.596 "is_configured": true, 00:16:16.596 "data_offset": 2048, 00:16:16.596 "data_size": 63488 00:16:16.596 }, 00:16:16.596 { 00:16:16.596 "name": "BaseBdev2", 00:16:16.596 "uuid": "d51732e2-30fd-522b-8a52-acfe0e54bf13", 00:16:16.596 "is_configured": true, 00:16:16.596 "data_offset": 2048, 00:16:16.596 "data_size": 63488 00:16:16.596 }, 00:16:16.596 { 00:16:16.596 "name": "BaseBdev3", 00:16:16.596 "uuid": "2e8d5dec-f290-5547-b8df-3fceaa0751f4", 00:16:16.596 "is_configured": true, 00:16:16.596 "data_offset": 2048, 00:16:16.596 "data_size": 63488 00:16:16.596 } 00:16:16.596 ] 00:16:16.596 }' 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.596 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.878 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.878 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.878 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.878 [2024-12-09 22:57:32.684705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.878 [2024-12-09 22:57:32.684834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.878 [2024-12-09 22:57:32.687738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.878 [2024-12-09 22:57:32.687834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.878 [2024-12-09 22:57:32.687897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:16.878 [2024-12-09 22:57:32.687940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:16.878 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.878 { 00:16:16.878 "results": [ 00:16:16.878 { 00:16:16.878 "job": "raid_bdev1", 00:16:16.878 "core_mask": "0x1", 00:16:16.878 "workload": "randrw", 00:16:16.878 "percentage": 50, 00:16:16.878 "status": "finished", 00:16:16.878 "queue_depth": 1, 00:16:16.878 "io_size": 131072, 00:16:16.878 "runtime": 1.359152, 00:16:16.878 "iops": 11433.599773976715, 00:16:16.878 "mibps": 1429.1999717470894, 00:16:16.879 "io_failed": 1, 00:16:16.879 "io_timeout": 0, 00:16:16.879 "avg_latency_us": 122.50434514816281, 00:16:16.879 "min_latency_us": 28.841921397379913, 00:16:16.879 "max_latency_us": 1523.926637554585 00:16:16.879 } 00:16:16.879 ], 00:16:16.879 "core_count": 1 00:16:16.879 } 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67629 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67629 ']' 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67629 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67629 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67629' 00:16:16.879 killing process with pid 67629 
00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67629 00:16:16.879 [2024-12-09 22:57:32.725363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.879 22:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67629 00:16:17.454 [2024-12-09 22:57:33.001811] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ilKmSWcpTb 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:16:18.837 00:16:18.837 real 0m4.892s 00:16:18.837 user 0m5.658s 00:16:18.837 sys 0m0.682s 00:16:18.837 ************************************ 00:16:18.837 END TEST raid_read_error_test 00:16:18.837 ************************************ 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.837 22:57:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.837 22:57:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:16:18.837 22:57:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:18.837 22:57:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.837 22:57:34 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.837 ************************************ 00:16:18.837 START TEST raid_write_error_test 00:16:18.837 ************************************ 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local 
base_bdevs 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5Gn1wE9dev 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67777 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67777 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67777 ']' 00:16:18.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.837 22:57:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.837 [2024-12-09 22:57:34.581540] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:16:18.837 [2024-12-09 22:57:34.581681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67777 ] 00:16:19.096 [2024-12-09 22:57:34.766220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.096 [2024-12-09 22:57:34.913938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.356 [2024-12-09 22:57:35.170014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.356 [2024-12-09 22:57:35.170111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:19.924 22:57:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.924 BaseBdev1_malloc 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.924 true 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.924 [2024-12-09 22:57:35.560698] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:19.924 [2024-12-09 22:57:35.560768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.924 [2024-12-09 22:57:35.560790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:19.924 [2024-12-09 22:57:35.560802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.924 [2024-12-09 22:57:35.563235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.924 [2024-12-09 22:57:35.563281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:19.924 BaseBdev1 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.924 BaseBdev2_malloc 00:16:19.924 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 true 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 [2024-12-09 22:57:35.639890] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:19.925 [2024-12-09 22:57:35.639966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.925 [2024-12-09 22:57:35.639985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:19.925 [2024-12-09 22:57:35.639997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.925 [2024-12-09 22:57:35.642485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:16:19.925 [2024-12-09 22:57:35.642608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:19.925 BaseBdev2 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 BaseBdev3_malloc 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 true 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 [2024-12-09 22:57:35.728612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:19.925 [2024-12-09 22:57:35.728763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.925 [2024-12-09 22:57:35.728788] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:19.925 [2024-12-09 22:57:35.728802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.925 [2024-12-09 22:57:35.731308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.925 [2024-12-09 22:57:35.731350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:19.925 BaseBdev3 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 [2024-12-09 22:57:35.740694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.925 [2024-12-09 22:57:35.743097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.925 [2024-12-09 22:57:35.743177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.925 [2024-12-09 22:57:35.743405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:19.925 [2024-12-09 22:57:35.743418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.925 [2024-12-09 22:57:35.743706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:19.925 [2024-12-09 22:57:35.743880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:19.925 [2024-12-09 22:57:35.743896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:16:19.925 [2024-12-09 22:57:35.744051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.925 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.185 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:16:20.185 "name": "raid_bdev1", 00:16:20.185 "uuid": "b7d030c4-3c01-4194-b899-f7296b15ff3f", 00:16:20.185 "strip_size_kb": 64, 00:16:20.185 "state": "online", 00:16:20.185 "raid_level": "concat", 00:16:20.185 "superblock": true, 00:16:20.185 "num_base_bdevs": 3, 00:16:20.185 "num_base_bdevs_discovered": 3, 00:16:20.185 "num_base_bdevs_operational": 3, 00:16:20.185 "base_bdevs_list": [ 00:16:20.185 { 00:16:20.185 "name": "BaseBdev1", 00:16:20.185 "uuid": "baeffd89-7afd-58d6-ac58-107dc5152500", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.185 "data_size": 63488 00:16:20.185 }, 00:16:20.185 { 00:16:20.185 "name": "BaseBdev2", 00:16:20.185 "uuid": "c0cf0b78-5426-51ed-a243-56bc090a1ddb", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.185 "data_size": 63488 00:16:20.185 }, 00:16:20.185 { 00:16:20.185 "name": "BaseBdev3", 00:16:20.185 "uuid": "2c26e80e-eb84-5398-b465-b80b4053844d", 00:16:20.185 "is_configured": true, 00:16:20.185 "data_offset": 2048, 00:16:20.185 "data_size": 63488 00:16:20.185 } 00:16:20.185 ] 00:16:20.185 }' 00:16:20.185 22:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.185 22:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.444 22:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:20.444 22:57:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:20.444 [2024-12-09 22:57:36.285528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.422 22:57:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.422 "name": "raid_bdev1", 00:16:21.422 "uuid": "b7d030c4-3c01-4194-b899-f7296b15ff3f", 00:16:21.422 "strip_size_kb": 64, 00:16:21.422 "state": "online", 00:16:21.422 "raid_level": "concat", 00:16:21.422 "superblock": true, 00:16:21.422 "num_base_bdevs": 3, 00:16:21.422 "num_base_bdevs_discovered": 3, 00:16:21.422 "num_base_bdevs_operational": 3, 00:16:21.422 "base_bdevs_list": [ 00:16:21.422 { 00:16:21.422 "name": "BaseBdev1", 00:16:21.422 "uuid": "baeffd89-7afd-58d6-ac58-107dc5152500", 00:16:21.422 "is_configured": true, 00:16:21.422 "data_offset": 2048, 00:16:21.422 "data_size": 63488 00:16:21.422 }, 00:16:21.422 { 00:16:21.422 "name": "BaseBdev2", 00:16:21.422 "uuid": "c0cf0b78-5426-51ed-a243-56bc090a1ddb", 00:16:21.422 "is_configured": true, 00:16:21.422 "data_offset": 2048, 00:16:21.422 "data_size": 63488 00:16:21.422 }, 00:16:21.422 { 00:16:21.422 "name": "BaseBdev3", 00:16:21.422 "uuid": "2c26e80e-eb84-5398-b465-b80b4053844d", 00:16:21.422 "is_configured": true, 00:16:21.422 "data_offset": 2048, 00:16:21.422 "data_size": 63488 00:16:21.422 } 00:16:21.422 ] 00:16:21.422 }' 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.422 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.991 [2024-12-09 22:57:37.638681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:16:21.991 [2024-12-09 22:57:37.638820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.991 [2024-12-09 22:57:37.641907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.991 [2024-12-09 22:57:37.642001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.991 [2024-12-09 22:57:37.642062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.991 [2024-12-09 22:57:37.642141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:21.991 { 00:16:21.991 "results": [ 00:16:21.991 { 00:16:21.991 "job": "raid_bdev1", 00:16:21.991 "core_mask": "0x1", 00:16:21.991 "workload": "randrw", 00:16:21.991 "percentage": 50, 00:16:21.991 "status": "finished", 00:16:21.991 "queue_depth": 1, 00:16:21.991 "io_size": 131072, 00:16:21.991 "runtime": 1.353751, 00:16:21.991 "iops": 12264.441540578733, 00:16:21.991 "mibps": 1533.0551925723416, 00:16:21.991 "io_failed": 1, 00:16:21.991 "io_timeout": 0, 00:16:21.991 "avg_latency_us": 114.39927202263041, 00:16:21.991 "min_latency_us": 28.39475982532751, 00:16:21.991 "max_latency_us": 1574.0087336244542 00:16:21.991 } 00:16:21.991 ], 00:16:21.991 "core_count": 1 00:16:21.991 } 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67777 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67777 ']' 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67777 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67777 00:16:21.991 killing process with pid 67777 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67777' 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67777 00:16:21.991 [2024-12-09 22:57:37.685452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.991 22:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67777 00:16:22.250 [2024-12-09 22:57:37.957054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5Gn1wE9dev 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:16:23.628 00:16:23.628 real 0m4.883s 00:16:23.628 user 0m5.681s 00:16:23.628 sys 0m0.685s 00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:16:23.628 22:57:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.628 ************************************ 00:16:23.628 END TEST raid_write_error_test 00:16:23.628 ************************************ 00:16:23.628 22:57:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:23.628 22:57:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:23.628 22:57:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:23.628 22:57:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.628 22:57:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.628 ************************************ 00:16:23.628 START TEST raid_state_function_test 00:16:23.628 ************************************ 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.628 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.628 22:57:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:23.629 Process raid pid: 67922 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67922 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67922' 00:16:23.629 
22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67922 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67922 ']' 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.629 22:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.888 [2024-12-09 22:57:39.520192] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:16:23.888 [2024-12-09 22:57:39.520324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.888 [2024-12-09 22:57:39.699002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.147 [2024-12-09 22:57:39.857427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.406 [2024-12-09 22:57:40.119218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.406 [2024-12-09 22:57:40.119281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.665 [2024-12-09 22:57:40.415094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.665 [2024-12-09 22:57:40.415183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.665 [2024-12-09 22:57:40.415196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.665 [2024-12-09 22:57:40.415207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.665 [2024-12-09 22:57:40.415214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:16:24.665 [2024-12-09 22:57:40.415224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.665 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.666 22:57:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.666 "name": "Existed_Raid", 00:16:24.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.666 "strip_size_kb": 0, 00:16:24.666 "state": "configuring", 00:16:24.666 "raid_level": "raid1", 00:16:24.666 "superblock": false, 00:16:24.666 "num_base_bdevs": 3, 00:16:24.666 "num_base_bdevs_discovered": 0, 00:16:24.666 "num_base_bdevs_operational": 3, 00:16:24.666 "base_bdevs_list": [ 00:16:24.666 { 00:16:24.666 "name": "BaseBdev1", 00:16:24.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.666 "is_configured": false, 00:16:24.666 "data_offset": 0, 00:16:24.666 "data_size": 0 00:16:24.666 }, 00:16:24.666 { 00:16:24.666 "name": "BaseBdev2", 00:16:24.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.666 "is_configured": false, 00:16:24.666 "data_offset": 0, 00:16:24.666 "data_size": 0 00:16:24.666 }, 00:16:24.666 { 00:16:24.666 "name": "BaseBdev3", 00:16:24.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.666 "is_configured": false, 00:16:24.666 "data_offset": 0, 00:16:24.666 "data_size": 0 00:16:24.666 } 00:16:24.666 ] 00:16:24.666 }' 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.666 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 [2024-12-09 22:57:40.906268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.235 [2024-12-09 22:57:40.906418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 [2024-12-09 22:57:40.918190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.235 [2024-12-09 22:57:40.918314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.235 [2024-12-09 22:57:40.918347] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.235 [2024-12-09 22:57:40.918374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.235 [2024-12-09 22:57:40.918395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.235 [2024-12-09 22:57:40.918421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 [2024-12-09 22:57:40.977045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.235 BaseBdev1 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.235 22:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 [ 00:16:25.235 { 00:16:25.235 "name": "BaseBdev1", 00:16:25.235 "aliases": [ 00:16:25.235 "28850d44-e383-4f91-ba01-0cbd129c818f" 00:16:25.235 ], 00:16:25.235 "product_name": "Malloc disk", 00:16:25.235 "block_size": 512, 00:16:25.235 "num_blocks": 65536, 00:16:25.235 "uuid": "28850d44-e383-4f91-ba01-0cbd129c818f", 00:16:25.235 "assigned_rate_limits": { 00:16:25.235 "rw_ios_per_sec": 0, 00:16:25.235 "rw_mbytes_per_sec": 0, 00:16:25.235 "r_mbytes_per_sec": 0, 00:16:25.235 "w_mbytes_per_sec": 0 00:16:25.235 }, 
00:16:25.235 "claimed": true, 00:16:25.235 "claim_type": "exclusive_write", 00:16:25.235 "zoned": false, 00:16:25.235 "supported_io_types": { 00:16:25.235 "read": true, 00:16:25.235 "write": true, 00:16:25.235 "unmap": true, 00:16:25.235 "flush": true, 00:16:25.235 "reset": true, 00:16:25.235 "nvme_admin": false, 00:16:25.235 "nvme_io": false, 00:16:25.235 "nvme_io_md": false, 00:16:25.235 "write_zeroes": true, 00:16:25.235 "zcopy": true, 00:16:25.235 "get_zone_info": false, 00:16:25.235 "zone_management": false, 00:16:25.235 "zone_append": false, 00:16:25.235 "compare": false, 00:16:25.235 "compare_and_write": false, 00:16:25.235 "abort": true, 00:16:25.235 "seek_hole": false, 00:16:25.235 "seek_data": false, 00:16:25.235 "copy": true, 00:16:25.235 "nvme_iov_md": false 00:16:25.235 }, 00:16:25.235 "memory_domains": [ 00:16:25.235 { 00:16:25.235 "dma_device_id": "system", 00:16:25.235 "dma_device_type": 1 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.236 "dma_device_type": 2 00:16:25.236 } 00:16:25.236 ], 00:16:25.236 "driver_specific": {} 00:16:25.236 } 00:16:25.236 ] 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.236 22:57:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.236 "name": "Existed_Raid", 00:16:25.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.236 "strip_size_kb": 0, 00:16:25.236 "state": "configuring", 00:16:25.236 "raid_level": "raid1", 00:16:25.236 "superblock": false, 00:16:25.236 "num_base_bdevs": 3, 00:16:25.236 "num_base_bdevs_discovered": 1, 00:16:25.236 "num_base_bdevs_operational": 3, 00:16:25.236 "base_bdevs_list": [ 00:16:25.236 { 00:16:25.236 "name": "BaseBdev1", 00:16:25.236 "uuid": "28850d44-e383-4f91-ba01-0cbd129c818f", 00:16:25.236 "is_configured": true, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 65536 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": "BaseBdev2", 00:16:25.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.236 "is_configured": false, 00:16:25.236 
"data_offset": 0, 00:16:25.236 "data_size": 0 00:16:25.236 }, 00:16:25.236 { 00:16:25.236 "name": "BaseBdev3", 00:16:25.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.236 "is_configured": false, 00:16:25.236 "data_offset": 0, 00:16:25.236 "data_size": 0 00:16:25.236 } 00:16:25.236 ] 00:16:25.236 }' 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.236 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 [2024-12-09 22:57:41.480356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.805 [2024-12-09 22:57:41.480463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 [2024-12-09 22:57:41.492390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.805 [2024-12-09 22:57:41.494885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.805 [2024-12-09 22:57:41.494940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:16:25.805 [2024-12-09 22:57:41.494952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.805 [2024-12-09 22:57:41.494962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.805 
22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.805 "name": "Existed_Raid", 00:16:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.805 "strip_size_kb": 0, 00:16:25.805 "state": "configuring", 00:16:25.805 "raid_level": "raid1", 00:16:25.805 "superblock": false, 00:16:25.805 "num_base_bdevs": 3, 00:16:25.805 "num_base_bdevs_discovered": 1, 00:16:25.805 "num_base_bdevs_operational": 3, 00:16:25.805 "base_bdevs_list": [ 00:16:25.805 { 00:16:25.805 "name": "BaseBdev1", 00:16:25.805 "uuid": "28850d44-e383-4f91-ba01-0cbd129c818f", 00:16:25.805 "is_configured": true, 00:16:25.805 "data_offset": 0, 00:16:25.805 "data_size": 65536 00:16:25.805 }, 00:16:25.805 { 00:16:25.805 "name": "BaseBdev2", 00:16:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.805 "is_configured": false, 00:16:25.805 "data_offset": 0, 00:16:25.805 "data_size": 0 00:16:25.805 }, 00:16:25.805 { 00:16:25.805 "name": "BaseBdev3", 00:16:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.805 "is_configured": false, 00:16:25.805 "data_offset": 0, 00:16:25.805 "data_size": 0 00:16:25.805 } 00:16:25.805 ] 00:16:25.805 }' 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.805 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 22:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.375 22:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.375 22:57:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 [2024-12-09 22:57:42.000621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.375 BaseBdev2 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 [ 00:16:26.375 { 00:16:26.375 "name": "BaseBdev2", 00:16:26.375 "aliases": [ 00:16:26.375 "74a962aa-71eb-465a-bbd9-ee078c9e3383" 00:16:26.375 ], 00:16:26.375 "product_name": "Malloc disk", 
00:16:26.375 "block_size": 512, 00:16:26.375 "num_blocks": 65536, 00:16:26.375 "uuid": "74a962aa-71eb-465a-bbd9-ee078c9e3383", 00:16:26.375 "assigned_rate_limits": { 00:16:26.375 "rw_ios_per_sec": 0, 00:16:26.375 "rw_mbytes_per_sec": 0, 00:16:26.375 "r_mbytes_per_sec": 0, 00:16:26.375 "w_mbytes_per_sec": 0 00:16:26.375 }, 00:16:26.375 "claimed": true, 00:16:26.375 "claim_type": "exclusive_write", 00:16:26.375 "zoned": false, 00:16:26.375 "supported_io_types": { 00:16:26.375 "read": true, 00:16:26.375 "write": true, 00:16:26.375 "unmap": true, 00:16:26.375 "flush": true, 00:16:26.375 "reset": true, 00:16:26.375 "nvme_admin": false, 00:16:26.375 "nvme_io": false, 00:16:26.375 "nvme_io_md": false, 00:16:26.375 "write_zeroes": true, 00:16:26.375 "zcopy": true, 00:16:26.375 "get_zone_info": false, 00:16:26.375 "zone_management": false, 00:16:26.375 "zone_append": false, 00:16:26.375 "compare": false, 00:16:26.375 "compare_and_write": false, 00:16:26.375 "abort": true, 00:16:26.375 "seek_hole": false, 00:16:26.375 "seek_data": false, 00:16:26.375 "copy": true, 00:16:26.375 "nvme_iov_md": false 00:16:26.375 }, 00:16:26.375 "memory_domains": [ 00:16:26.375 { 00:16:26.375 "dma_device_id": "system", 00:16:26.375 "dma_device_type": 1 00:16:26.375 }, 00:16:26.375 { 00:16:26.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.375 "dma_device_type": 2 00:16:26.375 } 00:16:26.375 ], 00:16:26.375 "driver_specific": {} 00:16:26.375 } 00:16:26.375 ] 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.375 "name": "Existed_Raid", 00:16:26.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.375 "strip_size_kb": 0, 00:16:26.375 "state": "configuring", 00:16:26.375 "raid_level": "raid1", 00:16:26.375 "superblock": false, 00:16:26.375 "num_base_bdevs": 3, 
00:16:26.375 "num_base_bdevs_discovered": 2, 00:16:26.375 "num_base_bdevs_operational": 3, 00:16:26.375 "base_bdevs_list": [ 00:16:26.375 { 00:16:26.375 "name": "BaseBdev1", 00:16:26.375 "uuid": "28850d44-e383-4f91-ba01-0cbd129c818f", 00:16:26.375 "is_configured": true, 00:16:26.375 "data_offset": 0, 00:16:26.375 "data_size": 65536 00:16:26.375 }, 00:16:26.375 { 00:16:26.375 "name": "BaseBdev2", 00:16:26.375 "uuid": "74a962aa-71eb-465a-bbd9-ee078c9e3383", 00:16:26.375 "is_configured": true, 00:16:26.375 "data_offset": 0, 00:16:26.375 "data_size": 65536 00:16:26.375 }, 00:16:26.375 { 00:16:26.375 "name": "BaseBdev3", 00:16:26.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.375 "is_configured": false, 00:16:26.375 "data_offset": 0, 00:16:26.375 "data_size": 0 00:16:26.375 } 00:16:26.375 ] 00:16:26.375 }' 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.375 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.944 [2024-12-09 22:57:42.615295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.944 [2024-12-09 22:57:42.615539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.944 [2024-12-09 22:57:42.615579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:26.944 [2024-12-09 22:57:42.615960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:26.944 [2024-12-09 22:57:42.616223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:16:26.944 [2024-12-09 22:57:42.616269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:26.944 [2024-12-09 22:57:42.616656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.944 BaseBdev3 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.944 [ 00:16:26.944 { 00:16:26.944 "name": "BaseBdev3", 00:16:26.944 "aliases": [ 00:16:26.944 
"60cd0049-dc9c-4665-8eb6-e690cf7e761b" 00:16:26.944 ], 00:16:26.944 "product_name": "Malloc disk", 00:16:26.944 "block_size": 512, 00:16:26.944 "num_blocks": 65536, 00:16:26.944 "uuid": "60cd0049-dc9c-4665-8eb6-e690cf7e761b", 00:16:26.944 "assigned_rate_limits": { 00:16:26.944 "rw_ios_per_sec": 0, 00:16:26.944 "rw_mbytes_per_sec": 0, 00:16:26.944 "r_mbytes_per_sec": 0, 00:16:26.944 "w_mbytes_per_sec": 0 00:16:26.944 }, 00:16:26.944 "claimed": true, 00:16:26.944 "claim_type": "exclusive_write", 00:16:26.944 "zoned": false, 00:16:26.944 "supported_io_types": { 00:16:26.944 "read": true, 00:16:26.944 "write": true, 00:16:26.944 "unmap": true, 00:16:26.944 "flush": true, 00:16:26.944 "reset": true, 00:16:26.944 "nvme_admin": false, 00:16:26.944 "nvme_io": false, 00:16:26.944 "nvme_io_md": false, 00:16:26.944 "write_zeroes": true, 00:16:26.944 "zcopy": true, 00:16:26.944 "get_zone_info": false, 00:16:26.944 "zone_management": false, 00:16:26.944 "zone_append": false, 00:16:26.944 "compare": false, 00:16:26.944 "compare_and_write": false, 00:16:26.944 "abort": true, 00:16:26.944 "seek_hole": false, 00:16:26.944 "seek_data": false, 00:16:26.944 "copy": true, 00:16:26.944 "nvme_iov_md": false 00:16:26.944 }, 00:16:26.944 "memory_domains": [ 00:16:26.944 { 00:16:26.944 "dma_device_id": "system", 00:16:26.944 "dma_device_type": 1 00:16:26.944 }, 00:16:26.944 { 00:16:26.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.944 "dma_device_type": 2 00:16:26.944 } 00:16:26.944 ], 00:16:26.944 "driver_specific": {} 00:16:26.944 } 00:16:26.944 ] 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.944 
22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.944 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.945 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.945 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.945 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.945 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.945 "name": "Existed_Raid", 00:16:26.945 "uuid": "231008ce-a4db-452e-a72f-4fd86965c68f", 00:16:26.945 "strip_size_kb": 0, 00:16:26.945 "state": "online", 00:16:26.945 "raid_level": 
"raid1", 00:16:26.945 "superblock": false, 00:16:26.945 "num_base_bdevs": 3, 00:16:26.945 "num_base_bdevs_discovered": 3, 00:16:26.945 "num_base_bdevs_operational": 3, 00:16:26.945 "base_bdevs_list": [ 00:16:26.945 { 00:16:26.945 "name": "BaseBdev1", 00:16:26.945 "uuid": "28850d44-e383-4f91-ba01-0cbd129c818f", 00:16:26.945 "is_configured": true, 00:16:26.945 "data_offset": 0, 00:16:26.945 "data_size": 65536 00:16:26.945 }, 00:16:26.945 { 00:16:26.945 "name": "BaseBdev2", 00:16:26.945 "uuid": "74a962aa-71eb-465a-bbd9-ee078c9e3383", 00:16:26.945 "is_configured": true, 00:16:26.945 "data_offset": 0, 00:16:26.945 "data_size": 65536 00:16:26.945 }, 00:16:26.945 { 00:16:26.945 "name": "BaseBdev3", 00:16:26.945 "uuid": "60cd0049-dc9c-4665-8eb6-e690cf7e761b", 00:16:26.945 "is_configured": true, 00:16:26.945 "data_offset": 0, 00:16:26.945 "data_size": 65536 00:16:26.945 } 00:16:26.945 ] 00:16:26.945 }' 00:16:26.945 22:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.945 22:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.513 [2024-12-09 22:57:43.095014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.513 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.513 "name": "Existed_Raid", 00:16:27.513 "aliases": [ 00:16:27.513 "231008ce-a4db-452e-a72f-4fd86965c68f" 00:16:27.513 ], 00:16:27.513 "product_name": "Raid Volume", 00:16:27.513 "block_size": 512, 00:16:27.513 "num_blocks": 65536, 00:16:27.513 "uuid": "231008ce-a4db-452e-a72f-4fd86965c68f", 00:16:27.513 "assigned_rate_limits": { 00:16:27.513 "rw_ios_per_sec": 0, 00:16:27.513 "rw_mbytes_per_sec": 0, 00:16:27.513 "r_mbytes_per_sec": 0, 00:16:27.513 "w_mbytes_per_sec": 0 00:16:27.513 }, 00:16:27.513 "claimed": false, 00:16:27.513 "zoned": false, 00:16:27.513 "supported_io_types": { 00:16:27.513 "read": true, 00:16:27.513 "write": true, 00:16:27.513 "unmap": false, 00:16:27.513 "flush": false, 00:16:27.513 "reset": true, 00:16:27.513 "nvme_admin": false, 00:16:27.513 "nvme_io": false, 00:16:27.513 "nvme_io_md": false, 00:16:27.513 "write_zeroes": true, 00:16:27.513 "zcopy": false, 00:16:27.513 "get_zone_info": false, 00:16:27.513 "zone_management": false, 00:16:27.513 "zone_append": false, 00:16:27.513 "compare": false, 00:16:27.513 "compare_and_write": false, 00:16:27.513 "abort": false, 00:16:27.513 "seek_hole": false, 00:16:27.513 "seek_data": false, 00:16:27.513 "copy": false, 00:16:27.513 "nvme_iov_md": false 00:16:27.513 }, 00:16:27.513 "memory_domains": [ 00:16:27.513 { 00:16:27.513 "dma_device_id": "system", 00:16:27.513 "dma_device_type": 1 00:16:27.513 }, 00:16:27.513 { 
00:16:27.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.513 "dma_device_type": 2 00:16:27.513 }, 00:16:27.513 { 00:16:27.513 "dma_device_id": "system", 00:16:27.513 "dma_device_type": 1 00:16:27.513 }, 00:16:27.513 { 00:16:27.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.513 "dma_device_type": 2 00:16:27.513 }, 00:16:27.513 { 00:16:27.513 "dma_device_id": "system", 00:16:27.513 "dma_device_type": 1 00:16:27.513 }, 00:16:27.514 { 00:16:27.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.514 "dma_device_type": 2 00:16:27.514 } 00:16:27.514 ], 00:16:27.514 "driver_specific": { 00:16:27.514 "raid": { 00:16:27.514 "uuid": "231008ce-a4db-452e-a72f-4fd86965c68f", 00:16:27.514 "strip_size_kb": 0, 00:16:27.514 "state": "online", 00:16:27.514 "raid_level": "raid1", 00:16:27.514 "superblock": false, 00:16:27.514 "num_base_bdevs": 3, 00:16:27.514 "num_base_bdevs_discovered": 3, 00:16:27.514 "num_base_bdevs_operational": 3, 00:16:27.514 "base_bdevs_list": [ 00:16:27.514 { 00:16:27.514 "name": "BaseBdev1", 00:16:27.514 "uuid": "28850d44-e383-4f91-ba01-0cbd129c818f", 00:16:27.514 "is_configured": true, 00:16:27.514 "data_offset": 0, 00:16:27.514 "data_size": 65536 00:16:27.514 }, 00:16:27.514 { 00:16:27.514 "name": "BaseBdev2", 00:16:27.514 "uuid": "74a962aa-71eb-465a-bbd9-ee078c9e3383", 00:16:27.514 "is_configured": true, 00:16:27.514 "data_offset": 0, 00:16:27.514 "data_size": 65536 00:16:27.514 }, 00:16:27.514 { 00:16:27.514 "name": "BaseBdev3", 00:16:27.514 "uuid": "60cd0049-dc9c-4665-8eb6-e690cf7e761b", 00:16:27.514 "is_configured": true, 00:16:27.514 "data_offset": 0, 00:16:27.514 "data_size": 65536 00:16:27.514 } 00:16:27.514 ] 00:16:27.514 } 00:16:27.514 } 00:16:27.514 }' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:16:27.514 BaseBdev2 00:16:27.514 BaseBdev3' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.514 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.514 [2024-12-09 22:57:43.342239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.773 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.774 22:57:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.774 "name": "Existed_Raid", 00:16:27.774 "uuid": "231008ce-a4db-452e-a72f-4fd86965c68f", 00:16:27.774 "strip_size_kb": 0, 00:16:27.774 "state": "online", 00:16:27.774 "raid_level": "raid1", 00:16:27.774 "superblock": false, 00:16:27.774 "num_base_bdevs": 3, 00:16:27.774 "num_base_bdevs_discovered": 2, 00:16:27.774 "num_base_bdevs_operational": 2, 00:16:27.774 "base_bdevs_list": [ 00:16:27.774 { 00:16:27.774 "name": null, 00:16:27.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.774 "is_configured": false, 00:16:27.774 "data_offset": 0, 00:16:27.774 "data_size": 65536 00:16:27.774 }, 00:16:27.774 { 00:16:27.774 "name": "BaseBdev2", 00:16:27.774 "uuid": "74a962aa-71eb-465a-bbd9-ee078c9e3383", 00:16:27.774 "is_configured": true, 00:16:27.774 "data_offset": 0, 00:16:27.774 "data_size": 65536 00:16:27.774 }, 00:16:27.774 { 00:16:27.774 "name": "BaseBdev3", 00:16:27.774 "uuid": "60cd0049-dc9c-4665-8eb6-e690cf7e761b", 00:16:27.774 "is_configured": true, 00:16:27.774 "data_offset": 0, 00:16:27.774 "data_size": 65536 00:16:27.774 } 00:16:27.774 ] 00:16:27.774 }' 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.774 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.343 22:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 [2024-12-09 22:57:43.942886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.343 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 [2024-12-09 22:57:44.114726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.343 [2024-12-09 22:57:44.114882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.603 [2024-12-09 22:57:44.238650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.603 [2024-12-09 22:57:44.238718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.603 [2024-12-09 22:57:44.238735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 BaseBdev2 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 [ 00:16:28.603 { 00:16:28.603 "name": "BaseBdev2", 00:16:28.603 "aliases": [ 00:16:28.603 "5aebc3d3-5852-4d7d-a829-a6d25a81900f" 00:16:28.603 ], 00:16:28.603 "product_name": "Malloc disk", 00:16:28.603 "block_size": 512, 00:16:28.603 "num_blocks": 65536, 00:16:28.603 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:28.603 "assigned_rate_limits": { 00:16:28.603 "rw_ios_per_sec": 0, 00:16:28.603 "rw_mbytes_per_sec": 0, 00:16:28.603 "r_mbytes_per_sec": 0, 00:16:28.603 "w_mbytes_per_sec": 0 00:16:28.603 }, 00:16:28.603 "claimed": false, 00:16:28.603 "zoned": false, 00:16:28.603 "supported_io_types": { 00:16:28.603 "read": true, 00:16:28.603 "write": true, 00:16:28.603 "unmap": true, 00:16:28.603 "flush": true, 00:16:28.603 "reset": true, 00:16:28.603 "nvme_admin": false, 00:16:28.603 "nvme_io": false, 00:16:28.603 "nvme_io_md": false, 00:16:28.603 "write_zeroes": true, 00:16:28.603 "zcopy": true, 00:16:28.603 "get_zone_info": false, 00:16:28.603 "zone_management": false, 00:16:28.603 "zone_append": false, 00:16:28.603 "compare": false, 00:16:28.603 "compare_and_write": false, 00:16:28.603 "abort": true, 00:16:28.603 "seek_hole": false, 00:16:28.603 "seek_data": false, 00:16:28.603 "copy": true, 00:16:28.603 "nvme_iov_md": false 00:16:28.603 }, 00:16:28.603 "memory_domains": [ 00:16:28.603 { 00:16:28.603 "dma_device_id": "system", 00:16:28.603 "dma_device_type": 1 00:16:28.603 }, 00:16:28.603 { 00:16:28.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.603 "dma_device_type": 2 00:16:28.603 } 00:16:28.603 ], 00:16:28.603 "driver_specific": {} 00:16:28.603 } 00:16:28.603 ] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 BaseBdev3 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.603 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.866 [ 00:16:28.866 { 00:16:28.866 "name": "BaseBdev3", 00:16:28.866 "aliases": [ 00:16:28.866 "c357241d-be8e-4e6f-b357-670b9f888298" 00:16:28.866 ], 00:16:28.866 "product_name": "Malloc disk", 00:16:28.866 "block_size": 512, 00:16:28.866 "num_blocks": 65536, 00:16:28.866 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:28.866 "assigned_rate_limits": { 00:16:28.866 "rw_ios_per_sec": 0, 00:16:28.866 "rw_mbytes_per_sec": 0, 00:16:28.866 "r_mbytes_per_sec": 0, 00:16:28.866 "w_mbytes_per_sec": 0 00:16:28.866 }, 00:16:28.866 "claimed": false, 00:16:28.866 "zoned": false, 00:16:28.866 "supported_io_types": { 00:16:28.866 "read": true, 00:16:28.866 "write": true, 00:16:28.866 "unmap": true, 00:16:28.866 "flush": true, 00:16:28.866 "reset": true, 00:16:28.866 "nvme_admin": false, 00:16:28.866 "nvme_io": false, 00:16:28.866 "nvme_io_md": false, 00:16:28.866 "write_zeroes": true, 00:16:28.866 "zcopy": true, 00:16:28.866 "get_zone_info": false, 00:16:28.866 "zone_management": false, 00:16:28.866 "zone_append": false, 00:16:28.866 "compare": false, 00:16:28.866 "compare_and_write": false, 00:16:28.866 "abort": true, 00:16:28.866 "seek_hole": false, 00:16:28.866 "seek_data": false, 00:16:28.866 "copy": true, 00:16:28.866 "nvme_iov_md": false 00:16:28.866 }, 00:16:28.866 "memory_domains": [ 00:16:28.866 { 00:16:28.866 "dma_device_id": "system", 00:16:28.866 "dma_device_type": 1 00:16:28.866 }, 00:16:28.866 { 00:16:28.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.866 "dma_device_type": 2 00:16:28.866 } 00:16:28.866 ], 00:16:28.866 "driver_specific": {} 00:16:28.866 } 00:16:28.866 ] 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.866 [2024-12-09 22:57:44.490907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.866 [2024-12-09 22:57:44.490979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.866 [2024-12-09 22:57:44.491008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.866 [2024-12-09 22:57:44.493384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.866 "name": "Existed_Raid", 00:16:28.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.866 "strip_size_kb": 0, 00:16:28.866 "state": "configuring", 00:16:28.866 "raid_level": "raid1", 00:16:28.866 "superblock": false, 00:16:28.866 "num_base_bdevs": 3, 00:16:28.866 "num_base_bdevs_discovered": 2, 00:16:28.866 "num_base_bdevs_operational": 3, 00:16:28.866 "base_bdevs_list": [ 00:16:28.866 { 00:16:28.866 "name": "BaseBdev1", 00:16:28.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.866 "is_configured": false, 00:16:28.866 "data_offset": 0, 00:16:28.866 "data_size": 0 00:16:28.866 }, 00:16:28.866 { 00:16:28.866 "name": "BaseBdev2", 00:16:28.866 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:28.866 "is_configured": true, 00:16:28.866 "data_offset": 0, 00:16:28.866 "data_size": 
65536 00:16:28.866 }, 00:16:28.866 { 00:16:28.866 "name": "BaseBdev3", 00:16:28.866 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:28.866 "is_configured": true, 00:16:28.866 "data_offset": 0, 00:16:28.866 "data_size": 65536 00:16:28.866 } 00:16:28.866 ] 00:16:28.866 }' 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.866 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.125 [2024-12-09 22:57:44.962175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.125 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.126 22:57:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.126 22:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.384 22:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.384 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.384 "name": "Existed_Raid", 00:16:29.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.384 "strip_size_kb": 0, 00:16:29.384 "state": "configuring", 00:16:29.384 "raid_level": "raid1", 00:16:29.384 "superblock": false, 00:16:29.384 "num_base_bdevs": 3, 00:16:29.384 "num_base_bdevs_discovered": 1, 00:16:29.384 "num_base_bdevs_operational": 3, 00:16:29.384 "base_bdevs_list": [ 00:16:29.384 { 00:16:29.384 "name": "BaseBdev1", 00:16:29.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.384 "is_configured": false, 00:16:29.384 "data_offset": 0, 00:16:29.384 "data_size": 0 00:16:29.384 }, 00:16:29.384 { 00:16:29.384 "name": null, 00:16:29.384 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:29.384 "is_configured": false, 00:16:29.384 "data_offset": 0, 00:16:29.384 "data_size": 65536 00:16:29.384 }, 00:16:29.384 { 00:16:29.384 "name": "BaseBdev3", 00:16:29.384 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:29.384 "is_configured": true, 00:16:29.384 "data_offset": 0, 00:16:29.384 "data_size": 65536 00:16:29.384 } 00:16:29.384 ] 00:16:29.384 }' 00:16:29.384 22:57:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.384 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.642 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.902 [2024-12-09 22:57:45.525623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.902 BaseBdev1 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.902 [ 00:16:29.902 { 00:16:29.902 "name": "BaseBdev1", 00:16:29.902 "aliases": [ 00:16:29.902 "c3716d19-8f79-406f-87bc-4f52ce08e3c6" 00:16:29.902 ], 00:16:29.902 "product_name": "Malloc disk", 00:16:29.902 "block_size": 512, 00:16:29.902 "num_blocks": 65536, 00:16:29.902 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:29.902 "assigned_rate_limits": { 00:16:29.902 "rw_ios_per_sec": 0, 00:16:29.902 "rw_mbytes_per_sec": 0, 00:16:29.902 "r_mbytes_per_sec": 0, 00:16:29.902 "w_mbytes_per_sec": 0 00:16:29.902 }, 00:16:29.902 "claimed": true, 00:16:29.902 "claim_type": "exclusive_write", 00:16:29.902 "zoned": false, 00:16:29.902 "supported_io_types": { 00:16:29.902 "read": true, 00:16:29.902 "write": true, 00:16:29.902 "unmap": true, 00:16:29.902 "flush": true, 00:16:29.902 "reset": true, 00:16:29.902 "nvme_admin": false, 00:16:29.902 "nvme_io": false, 00:16:29.902 "nvme_io_md": false, 00:16:29.902 "write_zeroes": true, 00:16:29.902 "zcopy": true, 00:16:29.902 "get_zone_info": false, 00:16:29.902 "zone_management": false, 
00:16:29.902 "zone_append": false, 00:16:29.902 "compare": false, 00:16:29.902 "compare_and_write": false, 00:16:29.902 "abort": true, 00:16:29.902 "seek_hole": false, 00:16:29.902 "seek_data": false, 00:16:29.902 "copy": true, 00:16:29.902 "nvme_iov_md": false 00:16:29.902 }, 00:16:29.902 "memory_domains": [ 00:16:29.902 { 00:16:29.902 "dma_device_id": "system", 00:16:29.902 "dma_device_type": 1 00:16:29.902 }, 00:16:29.902 { 00:16:29.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.902 "dma_device_type": 2 00:16:29.902 } 00:16:29.902 ], 00:16:29.902 "driver_specific": {} 00:16:29.902 } 00:16:29.902 ] 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.902 
22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.902 "name": "Existed_Raid", 00:16:29.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.902 "strip_size_kb": 0, 00:16:29.902 "state": "configuring", 00:16:29.902 "raid_level": "raid1", 00:16:29.902 "superblock": false, 00:16:29.902 "num_base_bdevs": 3, 00:16:29.902 "num_base_bdevs_discovered": 2, 00:16:29.902 "num_base_bdevs_operational": 3, 00:16:29.902 "base_bdevs_list": [ 00:16:29.902 { 00:16:29.902 "name": "BaseBdev1", 00:16:29.902 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:29.902 "is_configured": true, 00:16:29.902 "data_offset": 0, 00:16:29.902 "data_size": 65536 00:16:29.902 }, 00:16:29.902 { 00:16:29.902 "name": null, 00:16:29.902 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:29.902 "is_configured": false, 00:16:29.902 "data_offset": 0, 00:16:29.902 "data_size": 65536 00:16:29.902 }, 00:16:29.902 { 00:16:29.902 "name": "BaseBdev3", 00:16:29.902 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:29.902 "is_configured": true, 00:16:29.902 "data_offset": 0, 00:16:29.902 "data_size": 65536 00:16:29.902 } 00:16:29.902 ] 00:16:29.902 }' 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.902 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.162 22:57:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.162 22:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.162 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.162 22:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.429 [2024-12-09 22:57:46.028872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.429 "name": "Existed_Raid", 00:16:30.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.429 "strip_size_kb": 0, 00:16:30.429 "state": "configuring", 00:16:30.429 "raid_level": "raid1", 00:16:30.429 "superblock": false, 00:16:30.429 "num_base_bdevs": 3, 00:16:30.429 "num_base_bdevs_discovered": 1, 00:16:30.429 "num_base_bdevs_operational": 3, 00:16:30.429 "base_bdevs_list": [ 00:16:30.429 { 00:16:30.429 "name": "BaseBdev1", 00:16:30.429 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:30.429 "is_configured": true, 00:16:30.429 "data_offset": 0, 00:16:30.429 "data_size": 65536 00:16:30.429 }, 00:16:30.429 { 00:16:30.429 "name": null, 00:16:30.429 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:30.429 "is_configured": false, 00:16:30.429 "data_offset": 0, 00:16:30.429 "data_size": 65536 00:16:30.429 }, 00:16:30.429 { 00:16:30.429 "name": null, 00:16:30.429 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 
00:16:30.429 "is_configured": false, 00:16:30.429 "data_offset": 0, 00:16:30.429 "data_size": 65536 00:16:30.429 } 00:16:30.429 ] 00:16:30.429 }' 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.429 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 [2024-12-09 22:57:46.520377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.708 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.967 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.967 "name": "Existed_Raid", 00:16:30.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.967 "strip_size_kb": 0, 00:16:30.967 "state": "configuring", 00:16:30.967 "raid_level": "raid1", 00:16:30.967 "superblock": false, 00:16:30.967 "num_base_bdevs": 3, 00:16:30.967 "num_base_bdevs_discovered": 2, 00:16:30.967 "num_base_bdevs_operational": 3, 00:16:30.967 "base_bdevs_list": [ 00:16:30.967 { 00:16:30.967 "name": "BaseBdev1", 00:16:30.967 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:30.967 
"is_configured": true, 00:16:30.967 "data_offset": 0, 00:16:30.967 "data_size": 65536 00:16:30.967 }, 00:16:30.967 { 00:16:30.967 "name": null, 00:16:30.967 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:30.967 "is_configured": false, 00:16:30.967 "data_offset": 0, 00:16:30.967 "data_size": 65536 00:16:30.967 }, 00:16:30.967 { 00:16:30.967 "name": "BaseBdev3", 00:16:30.967 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:30.967 "is_configured": true, 00:16:30.967 "data_offset": 0, 00:16:30.967 "data_size": 65536 00:16:30.967 } 00:16:30.967 ] 00:16:30.967 }' 00:16:30.967 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.967 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.226 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.226 22:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.226 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.226 22:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.226 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.226 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:31.226 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.226 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.226 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.226 [2024-12-09 22:57:47.031563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.484 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.484 "name": "Existed_Raid", 00:16:31.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.484 "strip_size_kb": 0, 00:16:31.484 "state": 
"configuring", 00:16:31.484 "raid_level": "raid1", 00:16:31.484 "superblock": false, 00:16:31.484 "num_base_bdevs": 3, 00:16:31.484 "num_base_bdevs_discovered": 1, 00:16:31.484 "num_base_bdevs_operational": 3, 00:16:31.484 "base_bdevs_list": [ 00:16:31.484 { 00:16:31.484 "name": null, 00:16:31.484 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:31.484 "is_configured": false, 00:16:31.484 "data_offset": 0, 00:16:31.484 "data_size": 65536 00:16:31.484 }, 00:16:31.484 { 00:16:31.484 "name": null, 00:16:31.484 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:31.484 "is_configured": false, 00:16:31.484 "data_offset": 0, 00:16:31.484 "data_size": 65536 00:16:31.484 }, 00:16:31.484 { 00:16:31.484 "name": "BaseBdev3", 00:16:31.485 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:31.485 "is_configured": true, 00:16:31.485 "data_offset": 0, 00:16:31.485 "data_size": 65536 00:16:31.485 } 00:16:31.485 ] 00:16:31.485 }' 00:16:31.485 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.485 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.743 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.743 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.743 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.743 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:31.743 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.001 22:57:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.001 [2024-12-09 22:57:47.621997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.001 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.001 "name": "Existed_Raid", 00:16:32.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.001 "strip_size_kb": 0, 00:16:32.001 "state": "configuring", 00:16:32.001 "raid_level": "raid1", 00:16:32.001 "superblock": false, 00:16:32.001 "num_base_bdevs": 3, 00:16:32.001 "num_base_bdevs_discovered": 2, 00:16:32.001 "num_base_bdevs_operational": 3, 00:16:32.001 "base_bdevs_list": [ 00:16:32.002 { 00:16:32.002 "name": null, 00:16:32.002 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:32.002 "is_configured": false, 00:16:32.002 "data_offset": 0, 00:16:32.002 "data_size": 65536 00:16:32.002 }, 00:16:32.002 { 00:16:32.002 "name": "BaseBdev2", 00:16:32.002 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:32.002 "is_configured": true, 00:16:32.002 "data_offset": 0, 00:16:32.002 "data_size": 65536 00:16:32.002 }, 00:16:32.002 { 00:16:32.002 "name": "BaseBdev3", 00:16:32.002 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:32.002 "is_configured": true, 00:16:32.002 "data_offset": 0, 00:16:32.002 "data_size": 65536 00:16:32.002 } 00:16:32.002 ] 00:16:32.002 }' 00:16:32.002 22:57:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.002 22:57:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.261 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.520 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3716d19-8f79-406f-87bc-4f52ce08e3c6 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.521 [2024-12-09 22:57:48.174693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:32.521 [2024-12-09 22:57:48.174790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.521 [2024-12-09 22:57:48.174800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:32.521 [2024-12-09 22:57:48.175155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:32.521 [2024-12-09 22:57:48.175341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.521 [2024-12-09 22:57:48.175357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:16:32.521 NewBaseBdev 00:16:32.521 [2024-12-09 22:57:48.175704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.521 [ 00:16:32.521 { 00:16:32.521 "name": "NewBaseBdev", 00:16:32.521 "aliases": [ 00:16:32.521 "c3716d19-8f79-406f-87bc-4f52ce08e3c6" 00:16:32.521 ], 00:16:32.521 "product_name": "Malloc disk", 00:16:32.521 "block_size": 512, 00:16:32.521 "num_blocks": 65536, 
00:16:32.521 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:32.521 "assigned_rate_limits": { 00:16:32.521 "rw_ios_per_sec": 0, 00:16:32.521 "rw_mbytes_per_sec": 0, 00:16:32.521 "r_mbytes_per_sec": 0, 00:16:32.521 "w_mbytes_per_sec": 0 00:16:32.521 }, 00:16:32.521 "claimed": true, 00:16:32.521 "claim_type": "exclusive_write", 00:16:32.521 "zoned": false, 00:16:32.521 "supported_io_types": { 00:16:32.521 "read": true, 00:16:32.521 "write": true, 00:16:32.521 "unmap": true, 00:16:32.521 "flush": true, 00:16:32.521 "reset": true, 00:16:32.521 "nvme_admin": false, 00:16:32.521 "nvme_io": false, 00:16:32.521 "nvme_io_md": false, 00:16:32.521 "write_zeroes": true, 00:16:32.521 "zcopy": true, 00:16:32.521 "get_zone_info": false, 00:16:32.521 "zone_management": false, 00:16:32.521 "zone_append": false, 00:16:32.521 "compare": false, 00:16:32.521 "compare_and_write": false, 00:16:32.521 "abort": true, 00:16:32.521 "seek_hole": false, 00:16:32.521 "seek_data": false, 00:16:32.521 "copy": true, 00:16:32.521 "nvme_iov_md": false 00:16:32.521 }, 00:16:32.521 "memory_domains": [ 00:16:32.521 { 00:16:32.521 "dma_device_id": "system", 00:16:32.521 "dma_device_type": 1 00:16:32.521 }, 00:16:32.521 { 00:16:32.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.521 "dma_device_type": 2 00:16:32.521 } 00:16:32.521 ], 00:16:32.521 "driver_specific": {} 00:16:32.521 } 00:16:32.521 ] 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.521 
22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.521 "name": "Existed_Raid", 00:16:32.521 "uuid": "b4659d8b-9004-460a-8680-2956321f658c", 00:16:32.521 "strip_size_kb": 0, 00:16:32.521 "state": "online", 00:16:32.521 "raid_level": "raid1", 00:16:32.521 "superblock": false, 00:16:32.521 "num_base_bdevs": 3, 00:16:32.521 "num_base_bdevs_discovered": 3, 00:16:32.521 "num_base_bdevs_operational": 3, 00:16:32.521 "base_bdevs_list": [ 00:16:32.521 { 00:16:32.521 "name": "NewBaseBdev", 00:16:32.521 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:32.521 "is_configured": true, 00:16:32.521 
"data_offset": 0, 00:16:32.521 "data_size": 65536 00:16:32.521 }, 00:16:32.521 { 00:16:32.521 "name": "BaseBdev2", 00:16:32.521 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:32.521 "is_configured": true, 00:16:32.521 "data_offset": 0, 00:16:32.521 "data_size": 65536 00:16:32.521 }, 00:16:32.521 { 00:16:32.521 "name": "BaseBdev3", 00:16:32.521 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:32.521 "is_configured": true, 00:16:32.521 "data_offset": 0, 00:16:32.521 "data_size": 65536 00:16:32.521 } 00:16:32.521 ] 00:16:32.521 }' 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.521 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.823 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.823 [2024-12-09 22:57:48.678328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.083 "name": "Existed_Raid", 00:16:33.083 "aliases": [ 00:16:33.083 "b4659d8b-9004-460a-8680-2956321f658c" 00:16:33.083 ], 00:16:33.083 "product_name": "Raid Volume", 00:16:33.083 "block_size": 512, 00:16:33.083 "num_blocks": 65536, 00:16:33.083 "uuid": "b4659d8b-9004-460a-8680-2956321f658c", 00:16:33.083 "assigned_rate_limits": { 00:16:33.083 "rw_ios_per_sec": 0, 00:16:33.083 "rw_mbytes_per_sec": 0, 00:16:33.083 "r_mbytes_per_sec": 0, 00:16:33.083 "w_mbytes_per_sec": 0 00:16:33.083 }, 00:16:33.083 "claimed": false, 00:16:33.083 "zoned": false, 00:16:33.083 "supported_io_types": { 00:16:33.083 "read": true, 00:16:33.083 "write": true, 00:16:33.083 "unmap": false, 00:16:33.083 "flush": false, 00:16:33.083 "reset": true, 00:16:33.083 "nvme_admin": false, 00:16:33.083 "nvme_io": false, 00:16:33.083 "nvme_io_md": false, 00:16:33.083 "write_zeroes": true, 00:16:33.083 "zcopy": false, 00:16:33.083 "get_zone_info": false, 00:16:33.083 "zone_management": false, 00:16:33.083 "zone_append": false, 00:16:33.083 "compare": false, 00:16:33.083 "compare_and_write": false, 00:16:33.083 "abort": false, 00:16:33.083 "seek_hole": false, 00:16:33.083 "seek_data": false, 00:16:33.083 "copy": false, 00:16:33.083 "nvme_iov_md": false 00:16:33.083 }, 00:16:33.083 "memory_domains": [ 00:16:33.083 { 00:16:33.083 "dma_device_id": "system", 00:16:33.083 "dma_device_type": 1 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.083 "dma_device_type": 2 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "dma_device_id": "system", 00:16:33.083 "dma_device_type": 1 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.083 "dma_device_type": 2 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "dma_device_id": 
"system", 00:16:33.083 "dma_device_type": 1 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.083 "dma_device_type": 2 00:16:33.083 } 00:16:33.083 ], 00:16:33.083 "driver_specific": { 00:16:33.083 "raid": { 00:16:33.083 "uuid": "b4659d8b-9004-460a-8680-2956321f658c", 00:16:33.083 "strip_size_kb": 0, 00:16:33.083 "state": "online", 00:16:33.083 "raid_level": "raid1", 00:16:33.083 "superblock": false, 00:16:33.083 "num_base_bdevs": 3, 00:16:33.083 "num_base_bdevs_discovered": 3, 00:16:33.083 "num_base_bdevs_operational": 3, 00:16:33.083 "base_bdevs_list": [ 00:16:33.083 { 00:16:33.083 "name": "NewBaseBdev", 00:16:33.083 "uuid": "c3716d19-8f79-406f-87bc-4f52ce08e3c6", 00:16:33.083 "is_configured": true, 00:16:33.083 "data_offset": 0, 00:16:33.083 "data_size": 65536 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "name": "BaseBdev2", 00:16:33.083 "uuid": "5aebc3d3-5852-4d7d-a829-a6d25a81900f", 00:16:33.083 "is_configured": true, 00:16:33.083 "data_offset": 0, 00:16:33.083 "data_size": 65536 00:16:33.083 }, 00:16:33.083 { 00:16:33.083 "name": "BaseBdev3", 00:16:33.083 "uuid": "c357241d-be8e-4e6f-b357-670b9f888298", 00:16:33.083 "is_configured": true, 00:16:33.083 "data_offset": 0, 00:16:33.083 "data_size": 65536 00:16:33.083 } 00:16:33.083 ] 00:16:33.083 } 00:16:33.083 } 00:16:33.083 }' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.083 BaseBdev2 00:16:33.083 BaseBdev3' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.083 22:57:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.083 
22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.083 [2024-12-09 22:57:48.921658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.083 [2024-12-09 22:57:48.921718] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.083 [2024-12-09 22:57:48.921835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.083 [2024-12-09 22:57:48.922213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.083 [2024-12-09 22:57:48.922247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 67922 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67922 ']' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67922 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.083 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67922 00:16:33.345 killing process with pid 67922 00:16:33.345 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.345 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.345 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67922' 00:16:33.345 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67922 00:16:33.345 [2024-12-09 22:57:48.966739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.345 22:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67922 00:16:33.604 [2024-12-09 22:57:49.362774] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.978 22:57:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.978 00:16:34.978 real 0m11.412s 00:16:34.978 user 0m17.705s 00:16:34.978 sys 0m2.023s 00:16:34.978 22:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.978 22:57:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.978 ************************************ 00:16:34.978 END TEST raid_state_function_test 00:16:34.978 ************************************ 
00:16:35.237 22:57:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:35.237 22:57:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:35.237 22:57:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.237 22:57:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.237 ************************************ 00:16:35.237 START TEST raid_state_function_test_sb 00:16:35.237 ************************************ 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.237 
22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68554 00:16:35.237 Process raid pid: 68554 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68554' 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 
68554 00:16:35.237 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68554 ']' 00:16:35.238 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.238 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.238 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.238 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.238 22:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.238 [2024-12-09 22:57:51.025950] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:16:35.238 [2024-12-09 22:57:51.026182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.497 [2024-12-09 22:57:51.214102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.755 [2024-12-09 22:57:51.381729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.014 [2024-12-09 22:57:51.664371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.014 [2024-12-09 22:57:51.664447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:36.273 22:57:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.273 [2024-12-09 22:57:51.984741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.273 [2024-12-09 22:57:51.984824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.273 [2024-12-09 22:57:51.984844] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.273 [2024-12-09 22:57:51.984857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.273 [2024-12-09 22:57:51.984865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.273 [2024-12-09 22:57:51.984876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.273 22:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.273 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.273 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.273 "name": "Existed_Raid", 00:16:36.273 "uuid": "9b969ff0-afa4-42d2-a1b9-900de6aeadb5", 00:16:36.273 "strip_size_kb": 0, 00:16:36.273 "state": "configuring", 00:16:36.273 "raid_level": "raid1", 00:16:36.273 "superblock": true, 00:16:36.273 "num_base_bdevs": 3, 00:16:36.273 "num_base_bdevs_discovered": 0, 00:16:36.273 "num_base_bdevs_operational": 3, 00:16:36.273 "base_bdevs_list": [ 00:16:36.273 { 00:16:36.273 "name": "BaseBdev1", 00:16:36.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.273 "is_configured": false, 00:16:36.273 "data_offset": 0, 00:16:36.273 "data_size": 0 00:16:36.273 }, 00:16:36.273 { 00:16:36.273 "name": "BaseBdev2", 00:16:36.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.273 "is_configured": false, 00:16:36.273 "data_offset": 0, 00:16:36.273 "data_size": 0 
00:16:36.273 }, 00:16:36.273 { 00:16:36.273 "name": "BaseBdev3", 00:16:36.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.273 "is_configured": false, 00:16:36.273 "data_offset": 0, 00:16:36.273 "data_size": 0 00:16:36.273 } 00:16:36.273 ] 00:16:36.273 }' 00:16:36.273 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.273 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 [2024-12-09 22:57:52.440065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.842 [2024-12-09 22:57:52.440133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 [2024-12-09 22:57:52.448027] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.842 [2024-12-09 22:57:52.448100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.842 [2024-12-09 22:57:52.448120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:16:36.842 [2024-12-09 22:57:52.448136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.842 [2024-12-09 22:57:52.448144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.842 [2024-12-09 22:57:52.448155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 [2024-12-09 22:57:52.509126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.842 BaseBdev1 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.842 22:57:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 [ 00:16:36.842 { 00:16:36.842 "name": "BaseBdev1", 00:16:36.842 "aliases": [ 00:16:36.842 "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec" 00:16:36.842 ], 00:16:36.842 "product_name": "Malloc disk", 00:16:36.842 "block_size": 512, 00:16:36.842 "num_blocks": 65536, 00:16:36.842 "uuid": "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec", 00:16:36.842 "assigned_rate_limits": { 00:16:36.842 "rw_ios_per_sec": 0, 00:16:36.842 "rw_mbytes_per_sec": 0, 00:16:36.842 "r_mbytes_per_sec": 0, 00:16:36.842 "w_mbytes_per_sec": 0 00:16:36.842 }, 00:16:36.842 "claimed": true, 00:16:36.842 "claim_type": "exclusive_write", 00:16:36.842 "zoned": false, 00:16:36.842 "supported_io_types": { 00:16:36.842 "read": true, 00:16:36.842 "write": true, 00:16:36.842 "unmap": true, 00:16:36.842 "flush": true, 00:16:36.842 "reset": true, 00:16:36.842 "nvme_admin": false, 00:16:36.842 "nvme_io": false, 00:16:36.842 "nvme_io_md": false, 00:16:36.842 "write_zeroes": true, 00:16:36.842 "zcopy": true, 00:16:36.842 "get_zone_info": false, 00:16:36.842 "zone_management": false, 00:16:36.842 "zone_append": false, 00:16:36.842 "compare": false, 00:16:36.842 "compare_and_write": false, 00:16:36.842 "abort": true, 00:16:36.842 "seek_hole": false, 00:16:36.842 "seek_data": false, 00:16:36.842 "copy": true, 00:16:36.842 "nvme_iov_md": false 00:16:36.842 }, 
00:16:36.842 "memory_domains": [ 00:16:36.842 { 00:16:36.842 "dma_device_id": "system", 00:16:36.842 "dma_device_type": 1 00:16:36.842 }, 00:16:36.842 { 00:16:36.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.842 "dma_device_type": 2 00:16:36.842 } 00:16:36.842 ], 00:16:36.842 "driver_specific": {} 00:16:36.842 } 00:16:36.842 ] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.842 "name": "Existed_Raid", 00:16:36.842 "uuid": "7ef71137-cf1a-48a2-b236-1a1fe1dbc7f8", 00:16:36.842 "strip_size_kb": 0, 00:16:36.842 "state": "configuring", 00:16:36.842 "raid_level": "raid1", 00:16:36.842 "superblock": true, 00:16:36.842 "num_base_bdevs": 3, 00:16:36.842 "num_base_bdevs_discovered": 1, 00:16:36.842 "num_base_bdevs_operational": 3, 00:16:36.842 "base_bdevs_list": [ 00:16:36.842 { 00:16:36.842 "name": "BaseBdev1", 00:16:36.842 "uuid": "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec", 00:16:36.842 "is_configured": true, 00:16:36.842 "data_offset": 2048, 00:16:36.842 "data_size": 63488 00:16:36.842 }, 00:16:36.842 { 00:16:36.842 "name": "BaseBdev2", 00:16:36.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.842 "is_configured": false, 00:16:36.842 "data_offset": 0, 00:16:36.842 "data_size": 0 00:16:36.842 }, 00:16:36.842 { 00:16:36.842 "name": "BaseBdev3", 00:16:36.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.842 "is_configured": false, 00:16:36.842 "data_offset": 0, 00:16:36.842 "data_size": 0 00:16:36.842 } 00:16:36.842 ] 00:16:36.842 }' 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.842 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.411 
22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 [2024-12-09 22:57:52.964628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.411 [2024-12-09 22:57:52.964718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 [2024-12-09 22:57:52.976708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.411 [2024-12-09 22:57:52.979405] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.411 [2024-12-09 22:57:52.979480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.411 [2024-12-09 22:57:52.979493] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.411 [2024-12-09 22:57:52.979505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.411 22:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.411 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.411 "name": "Existed_Raid", 00:16:37.411 "uuid": "e9b38fe9-a8bc-4723-b956-764a932ca329", 00:16:37.411 "strip_size_kb": 0, 00:16:37.411 "state": "configuring", 00:16:37.411 "raid_level": "raid1", 00:16:37.411 "superblock": true, 00:16:37.411 
"num_base_bdevs": 3, 00:16:37.411 "num_base_bdevs_discovered": 1, 00:16:37.411 "num_base_bdevs_operational": 3, 00:16:37.411 "base_bdevs_list": [ 00:16:37.411 { 00:16:37.411 "name": "BaseBdev1", 00:16:37.411 "uuid": "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec", 00:16:37.411 "is_configured": true, 00:16:37.411 "data_offset": 2048, 00:16:37.411 "data_size": 63488 00:16:37.411 }, 00:16:37.411 { 00:16:37.411 "name": "BaseBdev2", 00:16:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.411 "is_configured": false, 00:16:37.411 "data_offset": 0, 00:16:37.411 "data_size": 0 00:16:37.411 }, 00:16:37.411 { 00:16:37.411 "name": "BaseBdev3", 00:16:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.411 "is_configured": false, 00:16:37.411 "data_offset": 0, 00:16:37.411 "data_size": 0 00:16:37.411 } 00:16:37.411 ] 00:16:37.411 }' 00:16:37.411 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.411 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 [2024-12-09 22:57:53.418621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.670 BaseBdev2 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 [ 00:16:37.670 { 00:16:37.670 "name": "BaseBdev2", 00:16:37.670 "aliases": [ 00:16:37.670 "f37df229-af0a-430a-898c-a28489f8ae4c" 00:16:37.670 ], 00:16:37.670 "product_name": "Malloc disk", 00:16:37.670 "block_size": 512, 00:16:37.670 "num_blocks": 65536, 00:16:37.670 "uuid": "f37df229-af0a-430a-898c-a28489f8ae4c", 00:16:37.670 "assigned_rate_limits": { 00:16:37.670 "rw_ios_per_sec": 0, 00:16:37.670 "rw_mbytes_per_sec": 0, 00:16:37.670 "r_mbytes_per_sec": 0, 00:16:37.670 "w_mbytes_per_sec": 0 00:16:37.670 }, 00:16:37.670 "claimed": true, 00:16:37.670 "claim_type": "exclusive_write", 00:16:37.670 "zoned": false, 00:16:37.670 "supported_io_types": { 00:16:37.670 "read": true, 00:16:37.670 "write": true, 00:16:37.670 "unmap": true, 00:16:37.670 "flush": true, 00:16:37.670 "reset": true, 00:16:37.670 
"nvme_admin": false, 00:16:37.670 "nvme_io": false, 00:16:37.670 "nvme_io_md": false, 00:16:37.670 "write_zeroes": true, 00:16:37.670 "zcopy": true, 00:16:37.670 "get_zone_info": false, 00:16:37.670 "zone_management": false, 00:16:37.670 "zone_append": false, 00:16:37.670 "compare": false, 00:16:37.670 "compare_and_write": false, 00:16:37.670 "abort": true, 00:16:37.670 "seek_hole": false, 00:16:37.670 "seek_data": false, 00:16:37.670 "copy": true, 00:16:37.670 "nvme_iov_md": false 00:16:37.670 }, 00:16:37.670 "memory_domains": [ 00:16:37.670 { 00:16:37.670 "dma_device_id": "system", 00:16:37.670 "dma_device_type": 1 00:16:37.670 }, 00:16:37.670 { 00:16:37.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.670 "dma_device_type": 2 00:16:37.670 } 00:16:37.670 ], 00:16:37.670 "driver_specific": {} 00:16:37.670 } 00:16:37.670 ] 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.670 "name": "Existed_Raid", 00:16:37.670 "uuid": "e9b38fe9-a8bc-4723-b956-764a932ca329", 00:16:37.670 "strip_size_kb": 0, 00:16:37.670 "state": "configuring", 00:16:37.670 "raid_level": "raid1", 00:16:37.670 "superblock": true, 00:16:37.670 "num_base_bdevs": 3, 00:16:37.670 "num_base_bdevs_discovered": 2, 00:16:37.670 "num_base_bdevs_operational": 3, 00:16:37.670 "base_bdevs_list": [ 00:16:37.670 { 00:16:37.670 "name": "BaseBdev1", 00:16:37.670 "uuid": "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec", 00:16:37.670 "is_configured": true, 00:16:37.670 "data_offset": 2048, 00:16:37.670 "data_size": 63488 00:16:37.670 }, 00:16:37.670 { 00:16:37.670 "name": "BaseBdev2", 00:16:37.670 "uuid": "f37df229-af0a-430a-898c-a28489f8ae4c", 00:16:37.670 "is_configured": true, 00:16:37.670 "data_offset": 2048, 00:16:37.670 "data_size": 
63488 00:16:37.670 }, 00:16:37.670 { 00:16:37.670 "name": "BaseBdev3", 00:16:37.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.670 "is_configured": false, 00:16:37.670 "data_offset": 0, 00:16:37.670 "data_size": 0 00:16:37.670 } 00:16:37.670 ] 00:16:37.670 }' 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.670 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.238 [2024-12-09 22:57:53.992944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.238 [2024-12-09 22:57:53.993273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.238 [2024-12-09 22:57:53.993304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.238 BaseBdev3 00:16:38.238 [2024-12-09 22:57:53.993865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:38.238 [2024-12-09 22:57:53.994077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.238 [2024-12-09 22:57:53.994096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:38.238 [2024-12-09 22:57:53.994283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:38.238 
22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.238 22:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.238 [ 00:16:38.238 { 00:16:38.238 "name": "BaseBdev3", 00:16:38.238 "aliases": [ 00:16:38.238 "c1e9e069-2adf-4411-8b4a-217ccff9cc7a" 00:16:38.238 ], 00:16:38.238 "product_name": "Malloc disk", 00:16:38.238 "block_size": 512, 00:16:38.238 "num_blocks": 65536, 00:16:38.238 "uuid": "c1e9e069-2adf-4411-8b4a-217ccff9cc7a", 00:16:38.238 "assigned_rate_limits": { 00:16:38.238 "rw_ios_per_sec": 0, 00:16:38.238 "rw_mbytes_per_sec": 0, 00:16:38.238 "r_mbytes_per_sec": 0, 00:16:38.238 "w_mbytes_per_sec": 0 00:16:38.238 }, 00:16:38.238 "claimed": true, 00:16:38.238 "claim_type": "exclusive_write", 00:16:38.238 "zoned": 
false, 00:16:38.238 "supported_io_types": { 00:16:38.238 "read": true, 00:16:38.238 "write": true, 00:16:38.238 "unmap": true, 00:16:38.238 "flush": true, 00:16:38.238 "reset": true, 00:16:38.238 "nvme_admin": false, 00:16:38.238 "nvme_io": false, 00:16:38.238 "nvme_io_md": false, 00:16:38.238 "write_zeroes": true, 00:16:38.238 "zcopy": true, 00:16:38.238 "get_zone_info": false, 00:16:38.238 "zone_management": false, 00:16:38.238 "zone_append": false, 00:16:38.238 "compare": false, 00:16:38.238 "compare_and_write": false, 00:16:38.238 "abort": true, 00:16:38.238 "seek_hole": false, 00:16:38.238 "seek_data": false, 00:16:38.238 "copy": true, 00:16:38.238 "nvme_iov_md": false 00:16:38.238 }, 00:16:38.238 "memory_domains": [ 00:16:38.238 { 00:16:38.238 "dma_device_id": "system", 00:16:38.238 "dma_device_type": 1 00:16:38.238 }, 00:16:38.238 { 00:16:38.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.238 "dma_device_type": 2 00:16:38.238 } 00:16:38.238 ], 00:16:38.238 "driver_specific": {} 00:16:38.238 } 00:16:38.238 ] 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.238 22:57:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.238 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.239 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.239 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.239 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.239 "name": "Existed_Raid", 00:16:38.239 "uuid": "e9b38fe9-a8bc-4723-b956-764a932ca329", 00:16:38.239 "strip_size_kb": 0, 00:16:38.239 "state": "online", 00:16:38.239 "raid_level": "raid1", 00:16:38.239 "superblock": true, 00:16:38.239 "num_base_bdevs": 3, 00:16:38.239 "num_base_bdevs_discovered": 3, 00:16:38.239 "num_base_bdevs_operational": 3, 00:16:38.239 "base_bdevs_list": [ 00:16:38.239 { 00:16:38.239 "name": "BaseBdev1", 00:16:38.239 "uuid": "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec", 00:16:38.239 "is_configured": true, 00:16:38.239 "data_offset": 2048, 00:16:38.239 "data_size": 63488 00:16:38.239 }, 00:16:38.239 { 00:16:38.239 
"name": "BaseBdev2", 00:16:38.239 "uuid": "f37df229-af0a-430a-898c-a28489f8ae4c", 00:16:38.239 "is_configured": true, 00:16:38.239 "data_offset": 2048, 00:16:38.239 "data_size": 63488 00:16:38.239 }, 00:16:38.239 { 00:16:38.239 "name": "BaseBdev3", 00:16:38.239 "uuid": "c1e9e069-2adf-4411-8b4a-217ccff9cc7a", 00:16:38.239 "is_configured": true, 00:16:38.239 "data_offset": 2048, 00:16:38.239 "data_size": 63488 00:16:38.239 } 00:16:38.239 ] 00:16:38.239 }' 00:16:38.239 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.239 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.807 [2024-12-09 22:57:54.488926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.807 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.807 "name": "Existed_Raid", 00:16:38.807 "aliases": [ 00:16:38.807 "e9b38fe9-a8bc-4723-b956-764a932ca329" 00:16:38.807 ], 00:16:38.807 "product_name": "Raid Volume", 00:16:38.807 "block_size": 512, 00:16:38.807 "num_blocks": 63488, 00:16:38.807 "uuid": "e9b38fe9-a8bc-4723-b956-764a932ca329", 00:16:38.807 "assigned_rate_limits": { 00:16:38.807 "rw_ios_per_sec": 0, 00:16:38.807 "rw_mbytes_per_sec": 0, 00:16:38.807 "r_mbytes_per_sec": 0, 00:16:38.807 "w_mbytes_per_sec": 0 00:16:38.807 }, 00:16:38.807 "claimed": false, 00:16:38.807 "zoned": false, 00:16:38.807 "supported_io_types": { 00:16:38.807 "read": true, 00:16:38.807 "write": true, 00:16:38.807 "unmap": false, 00:16:38.807 "flush": false, 00:16:38.807 "reset": true, 00:16:38.807 "nvme_admin": false, 00:16:38.807 "nvme_io": false, 00:16:38.807 "nvme_io_md": false, 00:16:38.807 "write_zeroes": true, 00:16:38.807 "zcopy": false, 00:16:38.807 "get_zone_info": false, 00:16:38.807 "zone_management": false, 00:16:38.807 "zone_append": false, 00:16:38.807 "compare": false, 00:16:38.807 "compare_and_write": false, 00:16:38.807 "abort": false, 00:16:38.807 "seek_hole": false, 00:16:38.807 "seek_data": false, 00:16:38.807 "copy": false, 00:16:38.807 "nvme_iov_md": false 00:16:38.807 }, 00:16:38.807 "memory_domains": [ 00:16:38.807 { 00:16:38.807 "dma_device_id": "system", 00:16:38.807 "dma_device_type": 1 00:16:38.807 }, 00:16:38.807 { 00:16:38.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.807 "dma_device_type": 2 00:16:38.807 }, 00:16:38.807 { 00:16:38.807 "dma_device_id": "system", 00:16:38.807 "dma_device_type": 1 00:16:38.807 }, 00:16:38.807 { 00:16:38.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.807 "dma_device_type": 2 00:16:38.807 }, 00:16:38.807 { 00:16:38.807 "dma_device_id": "system", 00:16:38.807 "dma_device_type": 1 00:16:38.807 }, 
00:16:38.807 { 00:16:38.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.807 "dma_device_type": 2 00:16:38.807 } 00:16:38.807 ], 00:16:38.807 "driver_specific": { 00:16:38.807 "raid": { 00:16:38.807 "uuid": "e9b38fe9-a8bc-4723-b956-764a932ca329", 00:16:38.808 "strip_size_kb": 0, 00:16:38.808 "state": "online", 00:16:38.808 "raid_level": "raid1", 00:16:38.808 "superblock": true, 00:16:38.808 "num_base_bdevs": 3, 00:16:38.808 "num_base_bdevs_discovered": 3, 00:16:38.808 "num_base_bdevs_operational": 3, 00:16:38.808 "base_bdevs_list": [ 00:16:38.808 { 00:16:38.808 "name": "BaseBdev1", 00:16:38.808 "uuid": "ea8b301d-b0cc-4e4f-aff2-89e657eb24ec", 00:16:38.808 "is_configured": true, 00:16:38.808 "data_offset": 2048, 00:16:38.808 "data_size": 63488 00:16:38.808 }, 00:16:38.808 { 00:16:38.808 "name": "BaseBdev2", 00:16:38.808 "uuid": "f37df229-af0a-430a-898c-a28489f8ae4c", 00:16:38.808 "is_configured": true, 00:16:38.808 "data_offset": 2048, 00:16:38.808 "data_size": 63488 00:16:38.808 }, 00:16:38.808 { 00:16:38.808 "name": "BaseBdev3", 00:16:38.808 "uuid": "c1e9e069-2adf-4411-8b4a-217ccff9cc7a", 00:16:38.808 "is_configured": true, 00:16:38.808 "data_offset": 2048, 00:16:38.808 "data_size": 63488 00:16:38.808 } 00:16:38.808 ] 00:16:38.808 } 00:16:38.808 } 00:16:38.808 }' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:38.808 BaseBdev2 00:16:38.808 BaseBdev3' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.808 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.067 22:57:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.067 [2024-12-09 22:57:54.740160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.067 "name": "Existed_Raid", 00:16:39.067 "uuid": "e9b38fe9-a8bc-4723-b956-764a932ca329", 00:16:39.067 "strip_size_kb": 0, 00:16:39.067 "state": "online", 00:16:39.067 "raid_level": 
"raid1", 00:16:39.067 "superblock": true, 00:16:39.067 "num_base_bdevs": 3, 00:16:39.067 "num_base_bdevs_discovered": 2, 00:16:39.067 "num_base_bdevs_operational": 2, 00:16:39.067 "base_bdevs_list": [ 00:16:39.067 { 00:16:39.067 "name": null, 00:16:39.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.067 "is_configured": false, 00:16:39.067 "data_offset": 0, 00:16:39.067 "data_size": 63488 00:16:39.067 }, 00:16:39.067 { 00:16:39.067 "name": "BaseBdev2", 00:16:39.067 "uuid": "f37df229-af0a-430a-898c-a28489f8ae4c", 00:16:39.067 "is_configured": true, 00:16:39.067 "data_offset": 2048, 00:16:39.067 "data_size": 63488 00:16:39.067 }, 00:16:39.067 { 00:16:39.067 "name": "BaseBdev3", 00:16:39.067 "uuid": "c1e9e069-2adf-4411-8b4a-217ccff9cc7a", 00:16:39.067 "is_configured": true, 00:16:39.067 "data_offset": 2048, 00:16:39.067 "data_size": 63488 00:16:39.067 } 00:16:39.067 ] 00:16:39.067 }' 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.067 22:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.634 [2024-12-09 22:57:55.300928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:39.634 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.634 22:57:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.634 [2024-12-09 22:57:55.478513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.634 [2024-12-09 22:57:55.478676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.892 [2024-12-09 22:57:55.603081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.892 [2024-12-09 22:57:55.603172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.892 [2024-12-09 22:57:55.603188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:39.892 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.892 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.892 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.892 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.892 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:39.893 22:57:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.893 BaseBdev2 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.893 22:57:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.893 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.893 [ 00:16:39.893 { 00:16:39.893 "name": "BaseBdev2", 00:16:39.893 "aliases": [ 00:16:39.893 "1b9589ff-5fc4-4cd0-92b0-81030be87b37" 00:16:39.893 ], 00:16:39.893 "product_name": "Malloc disk", 00:16:39.893 "block_size": 512, 00:16:39.893 "num_blocks": 65536, 00:16:39.893 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:39.893 "assigned_rate_limits": { 00:16:39.893 "rw_ios_per_sec": 0, 00:16:39.893 "rw_mbytes_per_sec": 0, 00:16:39.893 "r_mbytes_per_sec": 0, 00:16:39.893 "w_mbytes_per_sec": 0 00:16:39.893 }, 00:16:39.893 "claimed": false, 00:16:39.893 "zoned": false, 00:16:39.893 "supported_io_types": { 00:16:39.893 "read": true, 00:16:39.893 "write": true, 00:16:39.893 "unmap": true, 00:16:39.893 "flush": true, 00:16:39.893 "reset": true, 00:16:39.893 "nvme_admin": false, 00:16:39.893 "nvme_io": false, 00:16:39.893 "nvme_io_md": false, 00:16:39.893 "write_zeroes": true, 00:16:39.893 "zcopy": true, 00:16:39.893 "get_zone_info": false, 00:16:39.893 "zone_management": false, 00:16:39.893 "zone_append": false, 00:16:40.152 "compare": false, 00:16:40.152 "compare_and_write": false, 00:16:40.152 "abort": true, 00:16:40.152 "seek_hole": false, 00:16:40.152 "seek_data": false, 00:16:40.152 "copy": true, 00:16:40.152 "nvme_iov_md": false 00:16:40.152 }, 00:16:40.152 "memory_domains": [ 00:16:40.152 { 00:16:40.152 "dma_device_id": "system", 00:16:40.152 "dma_device_type": 1 00:16:40.152 }, 00:16:40.152 { 00:16:40.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.152 "dma_device_type": 2 00:16:40.152 } 00:16:40.152 ], 00:16:40.152 "driver_specific": {} 00:16:40.152 } 00:16:40.152 ] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.152 BaseBdev3 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.152 [ 00:16:40.152 { 00:16:40.152 "name": "BaseBdev3", 00:16:40.152 "aliases": [ 00:16:40.152 "62e6025b-eff0-4d55-b10d-72facf413978" 00:16:40.152 ], 00:16:40.152 "product_name": "Malloc disk", 00:16:40.152 "block_size": 512, 00:16:40.152 "num_blocks": 65536, 00:16:40.152 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:40.152 "assigned_rate_limits": { 00:16:40.152 "rw_ios_per_sec": 0, 00:16:40.152 "rw_mbytes_per_sec": 0, 00:16:40.152 "r_mbytes_per_sec": 0, 00:16:40.152 "w_mbytes_per_sec": 0 00:16:40.152 }, 00:16:40.152 "claimed": false, 00:16:40.152 "zoned": false, 00:16:40.152 "supported_io_types": { 00:16:40.152 "read": true, 00:16:40.152 "write": true, 00:16:40.152 "unmap": true, 00:16:40.152 "flush": true, 00:16:40.152 "reset": true, 00:16:40.152 "nvme_admin": false, 00:16:40.152 "nvme_io": false, 00:16:40.152 "nvme_io_md": false, 00:16:40.152 "write_zeroes": true, 00:16:40.152 "zcopy": true, 00:16:40.152 "get_zone_info": false, 00:16:40.152 "zone_management": false, 00:16:40.152 "zone_append": false, 00:16:40.152 "compare": false, 00:16:40.152 "compare_and_write": false, 00:16:40.152 "abort": true, 00:16:40.152 "seek_hole": false, 00:16:40.152 "seek_data": false, 00:16:40.152 "copy": true, 00:16:40.152 "nvme_iov_md": false 00:16:40.152 }, 00:16:40.152 "memory_domains": [ 00:16:40.152 { 00:16:40.152 "dma_device_id": "system", 00:16:40.152 "dma_device_type": 1 00:16:40.152 }, 00:16:40.152 { 00:16:40.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.152 "dma_device_type": 2 00:16:40.152 } 00:16:40.152 ], 00:16:40.152 "driver_specific": {} 00:16:40.152 } 00:16:40.152 ] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.152 
22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.152 [2024-12-09 22:57:55.851615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.152 [2024-12-09 22:57:55.851682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.152 [2024-12-09 22:57:55.851708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.152 [2024-12-09 22:57:55.854192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.152 22:57:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.152 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.152 "name": "Existed_Raid", 00:16:40.152 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:40.152 "strip_size_kb": 0, 00:16:40.152 "state": "configuring", 00:16:40.152 "raid_level": "raid1", 00:16:40.152 "superblock": true, 00:16:40.152 "num_base_bdevs": 3, 00:16:40.152 "num_base_bdevs_discovered": 2, 00:16:40.152 "num_base_bdevs_operational": 3, 00:16:40.152 "base_bdevs_list": [ 00:16:40.152 { 00:16:40.152 "name": "BaseBdev1", 00:16:40.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.153 "is_configured": false, 00:16:40.153 "data_offset": 0, 00:16:40.153 "data_size": 0 00:16:40.153 }, 00:16:40.153 { 00:16:40.153 "name": "BaseBdev2", 00:16:40.153 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:40.153 "is_configured": 
true, 00:16:40.153 "data_offset": 2048, 00:16:40.153 "data_size": 63488 00:16:40.153 }, 00:16:40.153 { 00:16:40.153 "name": "BaseBdev3", 00:16:40.153 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:40.153 "is_configured": true, 00:16:40.153 "data_offset": 2048, 00:16:40.153 "data_size": 63488 00:16:40.153 } 00:16:40.153 ] 00:16:40.153 }' 00:16:40.153 22:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.153 22:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.719 [2024-12-09 22:57:56.310930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.719 22:57:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.719 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.720 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.720 "name": "Existed_Raid", 00:16:40.720 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:40.720 "strip_size_kb": 0, 00:16:40.720 "state": "configuring", 00:16:40.720 "raid_level": "raid1", 00:16:40.720 "superblock": true, 00:16:40.720 "num_base_bdevs": 3, 00:16:40.720 "num_base_bdevs_discovered": 1, 00:16:40.720 "num_base_bdevs_operational": 3, 00:16:40.720 "base_bdevs_list": [ 00:16:40.720 { 00:16:40.720 "name": "BaseBdev1", 00:16:40.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.720 "is_configured": false, 00:16:40.720 "data_offset": 0, 00:16:40.720 "data_size": 0 00:16:40.720 }, 00:16:40.720 { 00:16:40.720 "name": null, 00:16:40.720 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:40.720 "is_configured": false, 00:16:40.720 "data_offset": 0, 00:16:40.720 "data_size": 63488 00:16:40.720 }, 00:16:40.720 { 00:16:40.720 "name": "BaseBdev3", 00:16:40.720 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:40.720 "is_configured": true, 
00:16:40.720 "data_offset": 2048, 00:16:40.720 "data_size": 63488 00:16:40.720 } 00:16:40.720 ] 00:16:40.720 }' 00:16:40.720 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.720 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.978 [2024-12-09 22:57:56.823888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.978 BaseBdev1 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.978 
22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.978 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.236 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.236 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.236 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.237 [ 00:16:41.237 { 00:16:41.237 "name": "BaseBdev1", 00:16:41.237 "aliases": [ 00:16:41.237 "66a523bb-bda2-472c-9b1c-a81725bb88dd" 00:16:41.237 ], 00:16:41.237 "product_name": "Malloc disk", 00:16:41.237 "block_size": 512, 00:16:41.237 "num_blocks": 65536, 00:16:41.237 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:41.237 "assigned_rate_limits": { 00:16:41.237 "rw_ios_per_sec": 0, 00:16:41.237 "rw_mbytes_per_sec": 0, 00:16:41.237 "r_mbytes_per_sec": 0, 00:16:41.237 "w_mbytes_per_sec": 0 00:16:41.237 }, 00:16:41.237 "claimed": true, 00:16:41.237 "claim_type": "exclusive_write", 00:16:41.237 "zoned": false, 00:16:41.237 "supported_io_types": { 00:16:41.237 "read": true, 00:16:41.237 "write": true, 00:16:41.237 "unmap": true, 00:16:41.237 "flush": true, 00:16:41.237 "reset": true, 00:16:41.237 "nvme_admin": false, 00:16:41.237 "nvme_io": 
false, 00:16:41.237 "nvme_io_md": false, 00:16:41.237 "write_zeroes": true, 00:16:41.237 "zcopy": true, 00:16:41.237 "get_zone_info": false, 00:16:41.237 "zone_management": false, 00:16:41.237 "zone_append": false, 00:16:41.237 "compare": false, 00:16:41.237 "compare_and_write": false, 00:16:41.237 "abort": true, 00:16:41.237 "seek_hole": false, 00:16:41.237 "seek_data": false, 00:16:41.237 "copy": true, 00:16:41.237 "nvme_iov_md": false 00:16:41.237 }, 00:16:41.237 "memory_domains": [ 00:16:41.237 { 00:16:41.237 "dma_device_id": "system", 00:16:41.237 "dma_device_type": 1 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.237 "dma_device_type": 2 00:16:41.237 } 00:16:41.237 ], 00:16:41.237 "driver_specific": {} 00:16:41.237 } 00:16:41.237 ] 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.237 22:57:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.237 "name": "Existed_Raid", 00:16:41.237 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:41.237 "strip_size_kb": 0, 00:16:41.237 "state": "configuring", 00:16:41.237 "raid_level": "raid1", 00:16:41.237 "superblock": true, 00:16:41.237 "num_base_bdevs": 3, 00:16:41.237 "num_base_bdevs_discovered": 2, 00:16:41.237 "num_base_bdevs_operational": 3, 00:16:41.237 "base_bdevs_list": [ 00:16:41.237 { 00:16:41.237 "name": "BaseBdev1", 00:16:41.237 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:41.237 "is_configured": true, 00:16:41.237 "data_offset": 2048, 00:16:41.237 "data_size": 63488 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "name": null, 00:16:41.237 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:41.237 "is_configured": false, 00:16:41.237 "data_offset": 0, 00:16:41.237 "data_size": 63488 00:16:41.237 }, 00:16:41.237 { 00:16:41.237 "name": "BaseBdev3", 00:16:41.237 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:41.237 "is_configured": true, 00:16:41.237 "data_offset": 2048, 00:16:41.237 "data_size": 63488 00:16:41.237 } 00:16:41.237 ] 00:16:41.237 }' 
00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.237 22:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.495 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.495 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.495 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.495 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.495 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.754 [2024-12-09 22:57:57.395068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.754 
22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.754 "name": "Existed_Raid", 00:16:41.754 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:41.754 "strip_size_kb": 0, 00:16:41.754 "state": "configuring", 00:16:41.754 "raid_level": "raid1", 00:16:41.754 "superblock": true, 00:16:41.754 "num_base_bdevs": 3, 00:16:41.754 "num_base_bdevs_discovered": 1, 00:16:41.754 "num_base_bdevs_operational": 3, 00:16:41.754 "base_bdevs_list": [ 00:16:41.754 { 00:16:41.754 "name": "BaseBdev1", 00:16:41.754 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:41.754 "is_configured": true, 00:16:41.754 "data_offset": 2048, 00:16:41.754 "data_size": 63488 00:16:41.754 }, 00:16:41.754 { 
00:16:41.754 "name": null, 00:16:41.754 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:41.754 "is_configured": false, 00:16:41.754 "data_offset": 0, 00:16:41.754 "data_size": 63488 00:16:41.754 }, 00:16:41.754 { 00:16:41.754 "name": null, 00:16:41.754 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:41.754 "is_configured": false, 00:16:41.754 "data_offset": 0, 00:16:41.754 "data_size": 63488 00:16:41.754 } 00:16:41.754 ] 00:16:41.754 }' 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.754 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.013 [2024-12-09 22:57:57.858331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.013 22:57:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.013 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.271 "name": "Existed_Raid", 00:16:42.271 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:42.271 "strip_size_kb": 0, 
00:16:42.271 "state": "configuring", 00:16:42.271 "raid_level": "raid1", 00:16:42.271 "superblock": true, 00:16:42.271 "num_base_bdevs": 3, 00:16:42.271 "num_base_bdevs_discovered": 2, 00:16:42.271 "num_base_bdevs_operational": 3, 00:16:42.271 "base_bdevs_list": [ 00:16:42.271 { 00:16:42.271 "name": "BaseBdev1", 00:16:42.271 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:42.271 "is_configured": true, 00:16:42.271 "data_offset": 2048, 00:16:42.271 "data_size": 63488 00:16:42.271 }, 00:16:42.271 { 00:16:42.271 "name": null, 00:16:42.271 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:42.271 "is_configured": false, 00:16:42.271 "data_offset": 0, 00:16:42.271 "data_size": 63488 00:16:42.271 }, 00:16:42.271 { 00:16:42.271 "name": "BaseBdev3", 00:16:42.271 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:42.271 "is_configured": true, 00:16:42.271 "data_offset": 2048, 00:16:42.271 "data_size": 63488 00:16:42.271 } 00:16:42.271 ] 00:16:42.271 }' 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.271 22:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.529 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.529 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.529 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.529 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.530 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.530 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:42.530 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:16:42.530 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.530 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.530 [2024-12-09 22:57:58.353674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.788 "name": "Existed_Raid", 00:16:42.788 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:42.788 "strip_size_kb": 0, 00:16:42.788 "state": "configuring", 00:16:42.788 "raid_level": "raid1", 00:16:42.788 "superblock": true, 00:16:42.788 "num_base_bdevs": 3, 00:16:42.788 "num_base_bdevs_discovered": 1, 00:16:42.788 "num_base_bdevs_operational": 3, 00:16:42.788 "base_bdevs_list": [ 00:16:42.788 { 00:16:42.788 "name": null, 00:16:42.788 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:42.788 "is_configured": false, 00:16:42.788 "data_offset": 0, 00:16:42.788 "data_size": 63488 00:16:42.788 }, 00:16:42.788 { 00:16:42.788 "name": null, 00:16:42.788 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:42.788 "is_configured": false, 00:16:42.788 "data_offset": 0, 00:16:42.788 "data_size": 63488 00:16:42.788 }, 00:16:42.788 { 00:16:42.788 "name": "BaseBdev3", 00:16:42.788 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:42.788 "is_configured": true, 00:16:42.788 "data_offset": 2048, 00:16:42.788 "data_size": 63488 00:16:42.788 } 00:16:42.788 ] 00:16:42.788 }' 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.788 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 [2024-12-09 22:57:58.985657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.357 22:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.357 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.357 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.357 "name": "Existed_Raid", 00:16:43.357 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:43.357 "strip_size_kb": 0, 00:16:43.357 "state": "configuring", 00:16:43.357 "raid_level": "raid1", 00:16:43.357 "superblock": true, 00:16:43.357 "num_base_bdevs": 3, 00:16:43.357 "num_base_bdevs_discovered": 2, 00:16:43.357 "num_base_bdevs_operational": 3, 00:16:43.357 "base_bdevs_list": [ 00:16:43.357 { 00:16:43.357 "name": null, 00:16:43.357 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:43.357 "is_configured": false, 00:16:43.357 "data_offset": 0, 00:16:43.357 "data_size": 63488 00:16:43.357 }, 00:16:43.357 { 00:16:43.357 "name": "BaseBdev2", 00:16:43.357 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:43.357 "is_configured": true, 00:16:43.357 "data_offset": 2048, 00:16:43.357 "data_size": 63488 00:16:43.357 }, 00:16:43.357 { 00:16:43.357 "name": "BaseBdev3", 00:16:43.357 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:43.357 "is_configured": true, 00:16:43.357 "data_offset": 2048, 00:16:43.357 "data_size": 63488 00:16:43.357 } 00:16:43.357 ] 00:16:43.357 }' 00:16:43.357 22:57:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.357 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:43.640 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.640 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.640 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 66a523bb-bda2-472c-9b1c-a81725bb88dd 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.910 [2024-12-09 22:57:59.610130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:43.910 [2024-12-09 22:57:59.610456] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:43.910 [2024-12-09 22:57:59.610496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:43.910 [2024-12-09 22:57:59.610881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:43.910 [2024-12-09 22:57:59.611086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:43.910 [2024-12-09 22:57:59.611106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:43.910 NewBaseBdev 00:16:43.910 [2024-12-09 22:57:59.611281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.910 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.910 [ 00:16:43.910 { 00:16:43.910 "name": "NewBaseBdev", 00:16:43.910 "aliases": [ 00:16:43.910 "66a523bb-bda2-472c-9b1c-a81725bb88dd" 00:16:43.910 ], 00:16:43.910 "product_name": "Malloc disk", 00:16:43.910 "block_size": 512, 00:16:43.910 "num_blocks": 65536, 00:16:43.910 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:43.910 "assigned_rate_limits": { 00:16:43.910 "rw_ios_per_sec": 0, 00:16:43.910 "rw_mbytes_per_sec": 0, 00:16:43.910 "r_mbytes_per_sec": 0, 00:16:43.910 "w_mbytes_per_sec": 0 00:16:43.910 }, 00:16:43.910 "claimed": true, 00:16:43.911 "claim_type": "exclusive_write", 00:16:43.911 "zoned": false, 00:16:43.911 "supported_io_types": { 00:16:43.911 "read": true, 00:16:43.911 "write": true, 00:16:43.911 "unmap": true, 00:16:43.911 "flush": true, 00:16:43.911 "reset": true, 00:16:43.911 "nvme_admin": false, 00:16:43.911 "nvme_io": false, 00:16:43.911 "nvme_io_md": false, 00:16:43.911 "write_zeroes": true, 00:16:43.911 "zcopy": true, 00:16:43.911 "get_zone_info": false, 00:16:43.911 "zone_management": false, 00:16:43.911 "zone_append": false, 00:16:43.911 "compare": false, 00:16:43.911 "compare_and_write": false, 00:16:43.911 "abort": true, 00:16:43.911 "seek_hole": false, 00:16:43.911 "seek_data": false, 00:16:43.911 "copy": true, 00:16:43.911 "nvme_iov_md": false 00:16:43.911 }, 00:16:43.911 "memory_domains": [ 00:16:43.911 { 00:16:43.911 "dma_device_id": "system", 00:16:43.911 "dma_device_type": 1 00:16:43.911 }, 00:16:43.911 { 00:16:43.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.911 "dma_device_type": 2 00:16:43.911 } 00:16:43.911 ], 00:16:43.911 
"driver_specific": {} 00:16:43.911 } 00:16:43.911 ] 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.911 "name": "Existed_Raid", 00:16:43.911 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:43.911 "strip_size_kb": 0, 00:16:43.911 "state": "online", 00:16:43.911 "raid_level": "raid1", 00:16:43.911 "superblock": true, 00:16:43.911 "num_base_bdevs": 3, 00:16:43.911 "num_base_bdevs_discovered": 3, 00:16:43.911 "num_base_bdevs_operational": 3, 00:16:43.911 "base_bdevs_list": [ 00:16:43.911 { 00:16:43.911 "name": "NewBaseBdev", 00:16:43.911 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:43.911 "is_configured": true, 00:16:43.911 "data_offset": 2048, 00:16:43.911 "data_size": 63488 00:16:43.911 }, 00:16:43.911 { 00:16:43.911 "name": "BaseBdev2", 00:16:43.911 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:43.911 "is_configured": true, 00:16:43.911 "data_offset": 2048, 00:16:43.911 "data_size": 63488 00:16:43.911 }, 00:16:43.911 { 00:16:43.911 "name": "BaseBdev3", 00:16:43.911 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:43.911 "is_configured": true, 00:16:43.911 "data_offset": 2048, 00:16:43.911 "data_size": 63488 00:16:43.911 } 00:16:43.911 ] 00:16:43.911 }' 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.911 22:57:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.478 22:58:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.478 [2024-12-09 22:58:00.105812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.478 "name": "Existed_Raid", 00:16:44.478 "aliases": [ 00:16:44.478 "3e394f17-311b-4925-ad8c-1f91bb7c8c72" 00:16:44.478 ], 00:16:44.478 "product_name": "Raid Volume", 00:16:44.478 "block_size": 512, 00:16:44.478 "num_blocks": 63488, 00:16:44.478 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:44.478 "assigned_rate_limits": { 00:16:44.478 "rw_ios_per_sec": 0, 00:16:44.478 "rw_mbytes_per_sec": 0, 00:16:44.478 "r_mbytes_per_sec": 0, 00:16:44.478 "w_mbytes_per_sec": 0 00:16:44.478 }, 00:16:44.478 "claimed": false, 00:16:44.478 "zoned": false, 00:16:44.478 "supported_io_types": { 00:16:44.478 "read": true, 00:16:44.478 "write": true, 00:16:44.478 "unmap": false, 00:16:44.478 "flush": false, 00:16:44.478 "reset": true, 00:16:44.478 "nvme_admin": false, 00:16:44.478 "nvme_io": false, 00:16:44.478 "nvme_io_md": false, 00:16:44.478 "write_zeroes": true, 00:16:44.478 "zcopy": false, 00:16:44.478 "get_zone_info": false, 00:16:44.478 "zone_management": false, 00:16:44.478 "zone_append": false, 
00:16:44.478 "compare": false, 00:16:44.478 "compare_and_write": false, 00:16:44.478 "abort": false, 00:16:44.478 "seek_hole": false, 00:16:44.478 "seek_data": false, 00:16:44.478 "copy": false, 00:16:44.478 "nvme_iov_md": false 00:16:44.478 }, 00:16:44.478 "memory_domains": [ 00:16:44.478 { 00:16:44.478 "dma_device_id": "system", 00:16:44.478 "dma_device_type": 1 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.478 "dma_device_type": 2 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "dma_device_id": "system", 00:16:44.478 "dma_device_type": 1 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.478 "dma_device_type": 2 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "dma_device_id": "system", 00:16:44.478 "dma_device_type": 1 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.478 "dma_device_type": 2 00:16:44.478 } 00:16:44.478 ], 00:16:44.478 "driver_specific": { 00:16:44.478 "raid": { 00:16:44.478 "uuid": "3e394f17-311b-4925-ad8c-1f91bb7c8c72", 00:16:44.478 "strip_size_kb": 0, 00:16:44.478 "state": "online", 00:16:44.478 "raid_level": "raid1", 00:16:44.478 "superblock": true, 00:16:44.478 "num_base_bdevs": 3, 00:16:44.478 "num_base_bdevs_discovered": 3, 00:16:44.478 "num_base_bdevs_operational": 3, 00:16:44.478 "base_bdevs_list": [ 00:16:44.478 { 00:16:44.478 "name": "NewBaseBdev", 00:16:44.478 "uuid": "66a523bb-bda2-472c-9b1c-a81725bb88dd", 00:16:44.478 "is_configured": true, 00:16:44.478 "data_offset": 2048, 00:16:44.478 "data_size": 63488 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "name": "BaseBdev2", 00:16:44.478 "uuid": "1b9589ff-5fc4-4cd0-92b0-81030be87b37", 00:16:44.478 "is_configured": true, 00:16:44.478 "data_offset": 2048, 00:16:44.478 "data_size": 63488 00:16:44.478 }, 00:16:44.478 { 00:16:44.478 "name": "BaseBdev3", 00:16:44.478 "uuid": "62e6025b-eff0-4d55-b10d-72facf413978", 00:16:44.478 "is_configured": true, 00:16:44.478 
"data_offset": 2048, 00:16:44.478 "data_size": 63488 00:16:44.478 } 00:16:44.478 ] 00:16:44.478 } 00:16:44.478 } 00:16:44.478 }' 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.478 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:44.478 BaseBdev2 00:16:44.478 BaseBdev3' 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.479 22:58:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.479 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:44.737 [2024-12-09 22:58:00.400944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.737 [2024-12-09 22:58:00.401004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.737 [2024-12-09 22:58:00.401136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.737 [2024-12-09 22:58:00.401537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.737 [2024-12-09 22:58:00.401553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68554 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68554 ']' 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68554 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68554 00:16:44.737 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.738 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.738 killing process with pid 68554 00:16:44.738 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68554' 00:16:44.738 22:58:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@973 -- # kill 68554 00:16:44.738 [2024-12-09 22:58:00.451033] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.738 22:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68554 00:16:44.996 [2024-12-09 22:58:00.833967] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.898 22:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:46.898 00:16:46.898 real 0m11.356s 00:16:46.898 user 0m17.535s 00:16:46.898 sys 0m2.104s 00:16:46.898 22:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.898 22:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.898 ************************************ 00:16:46.898 END TEST raid_state_function_test_sb 00:16:46.898 ************************************ 00:16:46.898 22:58:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:46.898 22:58:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:46.898 22:58:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.898 22:58:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.898 ************************************ 00:16:46.898 START TEST raid_superblock_test 00:16:46.898 ************************************ 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:46.898 
22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69181 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69181 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69181 ']' 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.898 22:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.898 [2024-12-09 22:58:02.421632] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:16:46.898 [2024-12-09 22:58:02.421760] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69181 ] 00:16:46.898 [2024-12-09 22:58:02.591182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.157 [2024-12-09 22:58:02.764293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.416 [2024-12-09 22:58:03.041941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.416 [2024-12-09 22:58:03.042027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.676 malloc1 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.676 [2024-12-09 22:58:03.423737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.676 [2024-12-09 22:58:03.423823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.676 [2024-12-09 22:58:03.423852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.676 [2024-12-09 22:58:03.423864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.676 [2024-12-09 22:58:03.426810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.676 [2024-12-09 22:58:03.426859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.676 pt1 00:16:47.676 22:58:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.676 malloc2 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.676 [2024-12-09 22:58:03.492265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.676 [2024-12-09 22:58:03.492344] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.676 [2024-12-09 22:58:03.492372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.676 [2024-12-09 22:58:03.492384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.676 [2024-12-09 22:58:03.495307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.676 [2024-12-09 22:58:03.495351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.676 pt2 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.676 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.937 malloc3 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.937 [2024-12-09 22:58:03.570352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:47.937 [2024-12-09 22:58:03.570428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.937 [2024-12-09 22:58:03.570455] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:47.937 [2024-12-09 22:58:03.570482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.937 [2024-12-09 22:58:03.573201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.937 [2024-12-09 22:58:03.573243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:47.937 pt3 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.937 [2024-12-09 22:58:03.578383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.937 [2024-12-09 22:58:03.580771] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.937 [2024-12-09 22:58:03.580855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:47.937 [2024-12-09 22:58:03.581052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:47.937 [2024-12-09 22:58:03.581082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:47.937 [2024-12-09 22:58:03.581366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:47.937 [2024-12-09 22:58:03.581595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:47.937 [2024-12-09 22:58:03.581617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:47.937 [2024-12-09 22:58:03.581817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.937 22:58:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.937 "name": "raid_bdev1", 00:16:47.937 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:47.937 "strip_size_kb": 0, 00:16:47.937 "state": "online", 00:16:47.937 "raid_level": "raid1", 00:16:47.937 "superblock": true, 00:16:47.937 "num_base_bdevs": 3, 00:16:47.937 "num_base_bdevs_discovered": 3, 00:16:47.937 "num_base_bdevs_operational": 3, 00:16:47.937 "base_bdevs_list": [ 00:16:47.937 { 00:16:47.937 "name": "pt1", 00:16:47.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.937 "is_configured": true, 00:16:47.937 "data_offset": 2048, 00:16:47.937 "data_size": 63488 00:16:47.937 }, 00:16:47.937 { 00:16:47.937 "name": "pt2", 00:16:47.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.937 "is_configured": true, 00:16:47.937 "data_offset": 2048, 00:16:47.937 "data_size": 63488 00:16:47.937 }, 00:16:47.937 { 00:16:47.937 "name": "pt3", 00:16:47.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.937 "is_configured": true, 00:16:47.937 "data_offset": 2048, 00:16:47.937 "data_size": 63488 00:16:47.937 } 00:16:47.937 ] 00:16:47.937 }' 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:47.937 22:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.512 [2024-12-09 22:58:04.101955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.512 "name": "raid_bdev1", 00:16:48.512 "aliases": [ 00:16:48.512 "d883844c-6e9f-4568-af8e-f218790a259e" 00:16:48.512 ], 00:16:48.512 "product_name": "Raid Volume", 00:16:48.512 "block_size": 512, 00:16:48.512 "num_blocks": 63488, 00:16:48.512 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:48.512 "assigned_rate_limits": { 00:16:48.512 "rw_ios_per_sec": 0, 00:16:48.512 "rw_mbytes_per_sec": 0, 00:16:48.512 "r_mbytes_per_sec": 0, 00:16:48.512 "w_mbytes_per_sec": 0 
00:16:48.512 }, 00:16:48.512 "claimed": false, 00:16:48.512 "zoned": false, 00:16:48.512 "supported_io_types": { 00:16:48.512 "read": true, 00:16:48.512 "write": true, 00:16:48.512 "unmap": false, 00:16:48.512 "flush": false, 00:16:48.512 "reset": true, 00:16:48.512 "nvme_admin": false, 00:16:48.512 "nvme_io": false, 00:16:48.512 "nvme_io_md": false, 00:16:48.512 "write_zeroes": true, 00:16:48.512 "zcopy": false, 00:16:48.512 "get_zone_info": false, 00:16:48.512 "zone_management": false, 00:16:48.512 "zone_append": false, 00:16:48.512 "compare": false, 00:16:48.512 "compare_and_write": false, 00:16:48.512 "abort": false, 00:16:48.512 "seek_hole": false, 00:16:48.512 "seek_data": false, 00:16:48.512 "copy": false, 00:16:48.512 "nvme_iov_md": false 00:16:48.512 }, 00:16:48.512 "memory_domains": [ 00:16:48.512 { 00:16:48.512 "dma_device_id": "system", 00:16:48.512 "dma_device_type": 1 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.512 "dma_device_type": 2 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "dma_device_id": "system", 00:16:48.512 "dma_device_type": 1 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.512 "dma_device_type": 2 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "dma_device_id": "system", 00:16:48.512 "dma_device_type": 1 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.512 "dma_device_type": 2 00:16:48.512 } 00:16:48.512 ], 00:16:48.512 "driver_specific": { 00:16:48.512 "raid": { 00:16:48.512 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:48.512 "strip_size_kb": 0, 00:16:48.512 "state": "online", 00:16:48.512 "raid_level": "raid1", 00:16:48.512 "superblock": true, 00:16:48.512 "num_base_bdevs": 3, 00:16:48.512 "num_base_bdevs_discovered": 3, 00:16:48.512 "num_base_bdevs_operational": 3, 00:16:48.512 "base_bdevs_list": [ 00:16:48.512 { 00:16:48.512 "name": "pt1", 00:16:48.512 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:48.512 "is_configured": true, 00:16:48.512 "data_offset": 2048, 00:16:48.512 "data_size": 63488 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "name": "pt2", 00:16:48.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.512 "is_configured": true, 00:16:48.512 "data_offset": 2048, 00:16:48.512 "data_size": 63488 00:16:48.512 }, 00:16:48.512 { 00:16:48.512 "name": "pt3", 00:16:48.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.512 "is_configured": true, 00:16:48.512 "data_offset": 2048, 00:16:48.512 "data_size": 63488 00:16:48.512 } 00:16:48.512 ] 00:16:48.512 } 00:16:48.512 } 00:16:48.512 }' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:48.512 pt2 00:16:48.512 pt3' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.512 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.513 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:48.513 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.513 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:48.773 [2024-12-09 22:58:04.389383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d883844c-6e9f-4568-af8e-f218790a259e 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d883844c-6e9f-4568-af8e-f218790a259e ']' 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 [2024-12-09 22:58:04.444949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.773 [2024-12-09 22:58:04.445000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.773 [2024-12-09 22:58:04.445119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.773 [2024-12-09 22:58:04.445224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.773 [2024-12-09 22:58:04.445243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 
00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.773 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.773 [2024-12-09 22:58:04.588809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:48.773 [2024-12-09 22:58:04.591368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:48.773 [2024-12-09 22:58:04.591443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:48.773 [2024-12-09 22:58:04.591530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:48.773 [2024-12-09 22:58:04.591595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:48.774 [2024-12-09 22:58:04.591618] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:48.774 [2024-12-09 22:58:04.591636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.774 [2024-12-09 22:58:04.591648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:48.774 request: 00:16:48.774 { 00:16:48.774 "name": "raid_bdev1", 00:16:48.774 "raid_level": "raid1", 00:16:48.774 "base_bdevs": [ 00:16:48.774 "malloc1", 00:16:48.774 "malloc2", 00:16:48.774 "malloc3" 00:16:48.774 ], 00:16:48.774 "superblock": false, 00:16:48.774 "method": "bdev_raid_create", 00:16:48.774 "req_id": 1 00:16:48.774 } 00:16:48.774 Got JSON-RPC error response 00:16:48.774 response: 00:16:48.774 { 00:16:48.774 "code": -17, 00:16:48.774 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:48.774 } 00:16:48.774 22:58:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.774 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.033 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:49.033 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:49.033 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.033 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.033 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.033 [2024-12-09 22:58:04.644648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.034 [2024-12-09 22:58:04.644735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.034 [2024-12-09 22:58:04.644762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:49.034 [2024-12-09 22:58:04.644778] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.034 [2024-12-09 22:58:04.647592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.034 [2024-12-09 22:58:04.647630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.034 [2024-12-09 22:58:04.647733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:49.034 [2024-12-09 22:58:04.647807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.034 pt1 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.034 "name": "raid_bdev1", 00:16:49.034 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:49.034 "strip_size_kb": 0, 00:16:49.034 "state": "configuring", 00:16:49.034 "raid_level": "raid1", 00:16:49.034 "superblock": true, 00:16:49.034 "num_base_bdevs": 3, 00:16:49.034 "num_base_bdevs_discovered": 1, 00:16:49.034 "num_base_bdevs_operational": 3, 00:16:49.034 "base_bdevs_list": [ 00:16:49.034 { 00:16:49.034 "name": "pt1", 00:16:49.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.034 "is_configured": true, 00:16:49.034 "data_offset": 2048, 00:16:49.034 "data_size": 63488 00:16:49.034 }, 00:16:49.034 { 00:16:49.034 "name": null, 00:16:49.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.034 "is_configured": false, 00:16:49.034 "data_offset": 2048, 00:16:49.034 "data_size": 63488 00:16:49.034 }, 00:16:49.034 { 00:16:49.034 "name": null, 00:16:49.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.034 "is_configured": false, 00:16:49.034 "data_offset": 2048, 00:16:49.034 "data_size": 63488 00:16:49.034 } 00:16:49.034 ] 00:16:49.034 }' 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.034 22:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.293 [2024-12-09 22:58:05.107976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.293 [2024-12-09 22:58:05.108073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.293 [2024-12-09 22:58:05.108103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:49.293 [2024-12-09 22:58:05.108118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.293 [2024-12-09 22:58:05.108739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.293 [2024-12-09 22:58:05.108772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.293 [2024-12-09 22:58:05.108893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.293 [2024-12-09 22:58:05.108930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.293 pt2 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.293 [2024-12-09 22:58:05.115941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.293 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:49.294 22:58:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.294 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.553 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.553 "name": "raid_bdev1", 00:16:49.553 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:49.553 "strip_size_kb": 0, 00:16:49.553 "state": "configuring", 00:16:49.553 "raid_level": "raid1", 00:16:49.553 "superblock": true, 00:16:49.553 "num_base_bdevs": 3, 00:16:49.553 "num_base_bdevs_discovered": 1, 00:16:49.553 "num_base_bdevs_operational": 3, 00:16:49.553 "base_bdevs_list": [ 
00:16:49.553 { 00:16:49.553 "name": "pt1", 00:16:49.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.553 "is_configured": true, 00:16:49.553 "data_offset": 2048, 00:16:49.553 "data_size": 63488 00:16:49.553 }, 00:16:49.553 { 00:16:49.553 "name": null, 00:16:49.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.553 "is_configured": false, 00:16:49.553 "data_offset": 0, 00:16:49.553 "data_size": 63488 00:16:49.553 }, 00:16:49.553 { 00:16:49.553 "name": null, 00:16:49.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.553 "is_configured": false, 00:16:49.553 "data_offset": 2048, 00:16:49.553 "data_size": 63488 00:16:49.553 } 00:16:49.553 ] 00:16:49.553 }' 00:16:49.553 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.553 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.812 [2024-12-09 22:58:05.635070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.812 [2024-12-09 22:58:05.635176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.812 [2024-12-09 22:58:05.635203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:49.812 [2024-12-09 22:58:05.635219] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.812 [2024-12-09 22:58:05.635855] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.812 [2024-12-09 22:58:05.635887] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.812 [2024-12-09 22:58:05.636020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.812 [2024-12-09 22:58:05.636080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.812 pt2 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.812 [2024-12-09 22:58:05.643011] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:49.812 [2024-12-09 22:58:05.643075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.812 [2024-12-09 22:58:05.643094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:49.812 [2024-12-09 22:58:05.643107] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.812 [2024-12-09 22:58:05.643620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.812 [2024-12-09 22:58:05.643657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:49.812 [2024-12-09 22:58:05.643739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 
00:16:49.812 [2024-12-09 22:58:05.643772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:49.812 [2024-12-09 22:58:05.643945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:49.812 [2024-12-09 22:58:05.643969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:49.812 [2024-12-09 22:58:05.644278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.812 [2024-12-09 22:58:05.644519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:49.812 [2024-12-09 22:58:05.644536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:49.812 [2024-12-09 22:58:05.644710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.812 pt3 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.812 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.072 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.072 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.072 "name": "raid_bdev1", 00:16:50.072 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:50.072 "strip_size_kb": 0, 00:16:50.072 "state": "online", 00:16:50.072 "raid_level": "raid1", 00:16:50.072 "superblock": true, 00:16:50.072 "num_base_bdevs": 3, 00:16:50.072 "num_base_bdevs_discovered": 3, 00:16:50.072 "num_base_bdevs_operational": 3, 00:16:50.072 "base_bdevs_list": [ 00:16:50.072 { 00:16:50.072 "name": "pt1", 00:16:50.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.072 "is_configured": true, 00:16:50.072 "data_offset": 2048, 00:16:50.072 "data_size": 63488 00:16:50.072 }, 00:16:50.072 { 00:16:50.072 "name": "pt2", 00:16:50.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.072 "is_configured": true, 00:16:50.072 "data_offset": 2048, 00:16:50.072 "data_size": 63488 00:16:50.072 }, 00:16:50.072 { 00:16:50.072 "name": "pt3", 00:16:50.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.072 "is_configured": true, 00:16:50.072 "data_offset": 2048, 00:16:50.072 
"data_size": 63488 00:16:50.072 } 00:16:50.072 ] 00:16:50.072 }' 00:16:50.072 22:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.072 22:58:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.331 [2024-12-09 22:58:06.142719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.331 "name": "raid_bdev1", 00:16:50.331 "aliases": [ 00:16:50.331 "d883844c-6e9f-4568-af8e-f218790a259e" 00:16:50.331 ], 00:16:50.331 "product_name": "Raid Volume", 00:16:50.331 "block_size": 512, 00:16:50.331 "num_blocks": 63488, 00:16:50.331 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:50.331 "assigned_rate_limits": { 
00:16:50.331 "rw_ios_per_sec": 0, 00:16:50.331 "rw_mbytes_per_sec": 0, 00:16:50.331 "r_mbytes_per_sec": 0, 00:16:50.331 "w_mbytes_per_sec": 0 00:16:50.331 }, 00:16:50.331 "claimed": false, 00:16:50.331 "zoned": false, 00:16:50.331 "supported_io_types": { 00:16:50.331 "read": true, 00:16:50.331 "write": true, 00:16:50.331 "unmap": false, 00:16:50.331 "flush": false, 00:16:50.331 "reset": true, 00:16:50.331 "nvme_admin": false, 00:16:50.331 "nvme_io": false, 00:16:50.331 "nvme_io_md": false, 00:16:50.331 "write_zeroes": true, 00:16:50.331 "zcopy": false, 00:16:50.331 "get_zone_info": false, 00:16:50.331 "zone_management": false, 00:16:50.331 "zone_append": false, 00:16:50.331 "compare": false, 00:16:50.331 "compare_and_write": false, 00:16:50.331 "abort": false, 00:16:50.331 "seek_hole": false, 00:16:50.331 "seek_data": false, 00:16:50.331 "copy": false, 00:16:50.331 "nvme_iov_md": false 00:16:50.331 }, 00:16:50.331 "memory_domains": [ 00:16:50.331 { 00:16:50.331 "dma_device_id": "system", 00:16:50.331 "dma_device_type": 1 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.331 "dma_device_type": 2 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "dma_device_id": "system", 00:16:50.331 "dma_device_type": 1 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.331 "dma_device_type": 2 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "dma_device_id": "system", 00:16:50.331 "dma_device_type": 1 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.331 "dma_device_type": 2 00:16:50.331 } 00:16:50.331 ], 00:16:50.331 "driver_specific": { 00:16:50.331 "raid": { 00:16:50.331 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:50.331 "strip_size_kb": 0, 00:16:50.331 "state": "online", 00:16:50.331 "raid_level": "raid1", 00:16:50.331 "superblock": true, 00:16:50.331 "num_base_bdevs": 3, 00:16:50.331 "num_base_bdevs_discovered": 3, 00:16:50.331 
"num_base_bdevs_operational": 3, 00:16:50.331 "base_bdevs_list": [ 00:16:50.331 { 00:16:50.331 "name": "pt1", 00:16:50.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.331 "is_configured": true, 00:16:50.331 "data_offset": 2048, 00:16:50.331 "data_size": 63488 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "name": "pt2", 00:16:50.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.331 "is_configured": true, 00:16:50.331 "data_offset": 2048, 00:16:50.331 "data_size": 63488 00:16:50.331 }, 00:16:50.331 { 00:16:50.331 "name": "pt3", 00:16:50.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.331 "is_configured": true, 00:16:50.331 "data_offset": 2048, 00:16:50.331 "data_size": 63488 00:16:50.331 } 00:16:50.331 ] 00:16:50.331 } 00:16:50.331 } 00:16:50.331 }' 00:16:50.331 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:50.591 pt2 00:16:50.591 pt3' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.591 22:58:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:50.591 [2024-12-09 22:58:06.402221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d883844c-6e9f-4568-af8e-f218790a259e '!=' d883844c-6e9f-4568-af8e-f218790a259e ']' 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.591 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.851 [2024-12-09 22:58:06.449885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.851 22:58:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.851 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.852 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.852 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.852 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.852 "name": "raid_bdev1", 00:16:50.852 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:50.852 "strip_size_kb": 0, 00:16:50.852 "state": "online", 00:16:50.852 "raid_level": "raid1", 00:16:50.852 "superblock": true, 00:16:50.852 "num_base_bdevs": 3, 00:16:50.852 "num_base_bdevs_discovered": 2, 00:16:50.852 "num_base_bdevs_operational": 2, 00:16:50.852 "base_bdevs_list": [ 00:16:50.852 { 00:16:50.852 "name": null, 00:16:50.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.852 
"is_configured": false, 00:16:50.852 "data_offset": 0, 00:16:50.852 "data_size": 63488 00:16:50.852 }, 00:16:50.852 { 00:16:50.852 "name": "pt2", 00:16:50.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.852 "is_configured": true, 00:16:50.852 "data_offset": 2048, 00:16:50.852 "data_size": 63488 00:16:50.852 }, 00:16:50.852 { 00:16:50.852 "name": "pt3", 00:16:50.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.852 "is_configured": true, 00:16:50.852 "data_offset": 2048, 00:16:50.852 "data_size": 63488 00:16:50.852 } 00:16:50.852 ] 00:16:50.852 }' 00:16:50.852 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.852 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.110 [2024-12-09 22:58:06.921129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.110 [2024-12-09 22:58:06.921181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.110 [2024-12-09 22:58:06.921315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.110 [2024-12-09 22:58:06.921394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.110 [2024-12-09 22:58:06.921412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.110 
22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.110 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 22:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 [2024-12-09 22:58:07.008929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.368 [2024-12-09 22:58:07.009021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.368 [2024-12-09 22:58:07.009044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:51.368 [2024-12-09 22:58:07.009059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.368 [2024-12-09 22:58:07.012144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.368 [2024-12-09 22:58:07.012206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.368 [2024-12-09 22:58:07.012330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:51.368 [2024-12-09 22:58:07.012404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.368 pt2 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.368 22:58:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.368 "name": "raid_bdev1", 00:16:51.368 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:51.368 "strip_size_kb": 0, 00:16:51.368 "state": "configuring", 00:16:51.368 "raid_level": "raid1", 00:16:51.368 "superblock": true, 00:16:51.368 "num_base_bdevs": 3, 00:16:51.368 "num_base_bdevs_discovered": 1, 00:16:51.368 "num_base_bdevs_operational": 2, 00:16:51.368 "base_bdevs_list": [ 00:16:51.368 { 00:16:51.368 "name": null, 00:16:51.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.368 
"is_configured": false, 00:16:51.368 "data_offset": 2048, 00:16:51.368 "data_size": 63488 00:16:51.368 }, 00:16:51.368 { 00:16:51.368 "name": "pt2", 00:16:51.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.368 "is_configured": true, 00:16:51.368 "data_offset": 2048, 00:16:51.368 "data_size": 63488 00:16:51.368 }, 00:16:51.368 { 00:16:51.368 "name": null, 00:16:51.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.368 "is_configured": false, 00:16:51.368 "data_offset": 2048, 00:16:51.368 "data_size": 63488 00:16:51.368 } 00:16:51.368 ] 00:16:51.368 }' 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.368 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.625 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:51.625 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:51.625 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:51.625 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:51.625 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.625 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.625 [2024-12-09 22:58:07.468270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:51.625 [2024-12-09 22:58:07.468378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.625 [2024-12-09 22:58:07.468415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:51.625 [2024-12-09 22:58:07.468433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.625 [2024-12-09 22:58:07.469092] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.625 [2024-12-09 22:58:07.469129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:51.626 [2024-12-09 22:58:07.469265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:51.626 [2024-12-09 22:58:07.469310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:51.626 [2024-12-09 22:58:07.469475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:51.626 [2024-12-09 22:58:07.469493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.626 [2024-12-09 22:58:07.469831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.626 [2024-12-09 22:58:07.470034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:51.626 [2024-12-09 22:58:07.470050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:51.626 [2024-12-09 22:58:07.470219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.626 pt3 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.626 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.883 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.883 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.883 "name": "raid_bdev1", 00:16:51.883 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:51.883 "strip_size_kb": 0, 00:16:51.883 "state": "online", 00:16:51.883 "raid_level": "raid1", 00:16:51.883 "superblock": true, 00:16:51.883 "num_base_bdevs": 3, 00:16:51.883 "num_base_bdevs_discovered": 2, 00:16:51.883 "num_base_bdevs_operational": 2, 00:16:51.883 "base_bdevs_list": [ 00:16:51.883 { 00:16:51.883 "name": null, 00:16:51.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.883 "is_configured": false, 00:16:51.883 "data_offset": 2048, 00:16:51.883 "data_size": 63488 00:16:51.883 }, 00:16:51.883 { 00:16:51.883 "name": "pt2", 00:16:51.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.883 "is_configured": true, 00:16:51.883 "data_offset": 2048, 00:16:51.883 "data_size": 63488 00:16:51.883 }, 00:16:51.883 { 00:16:51.883 "name": "pt3", 00:16:51.883 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:16:51.883 "is_configured": true, 00:16:51.883 "data_offset": 2048, 00:16:51.883 "data_size": 63488 00:16:51.883 } 00:16:51.883 ] 00:16:51.883 }' 00:16:51.883 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.883 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 [2024-12-09 22:58:07.887552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.141 [2024-12-09 22:58:07.887607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.141 [2024-12-09 22:58:07.887717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.141 [2024-12-09 22:58:07.887802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.141 [2024-12-09 22:58:07.887817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 [2024-12-09 22:58:07.959500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:52.141 [2024-12-09 22:58:07.959619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.141 [2024-12-09 22:58:07.959647] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:52.141 [2024-12-09 22:58:07.959658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.141 [2024-12-09 22:58:07.962699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.141 [2024-12-09 22:58:07.962739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:52.141 [2024-12-09 22:58:07.962875] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:16:52.141 [2024-12-09 22:58:07.962939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:52.141 [2024-12-09 22:58:07.963107] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:52.141 [2024-12-09 22:58:07.963128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.141 [2024-12-09 22:58:07.963151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:52.141 [2024-12-09 22:58:07.963229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.141 pt1 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.141 22:58:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.141 22:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.469 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.469 "name": "raid_bdev1", 00:16:52.469 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:52.469 "strip_size_kb": 0, 00:16:52.469 "state": "configuring", 00:16:52.469 "raid_level": "raid1", 00:16:52.469 "superblock": true, 00:16:52.469 "num_base_bdevs": 3, 00:16:52.469 "num_base_bdevs_discovered": 1, 00:16:52.469 "num_base_bdevs_operational": 2, 00:16:52.469 "base_bdevs_list": [ 00:16:52.469 { 00:16:52.469 "name": null, 00:16:52.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.469 "is_configured": false, 00:16:52.469 "data_offset": 2048, 00:16:52.469 "data_size": 63488 00:16:52.469 }, 00:16:52.469 { 00:16:52.469 "name": "pt2", 00:16:52.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.469 "is_configured": true, 00:16:52.469 "data_offset": 2048, 00:16:52.469 "data_size": 63488 00:16:52.469 }, 00:16:52.469 { 00:16:52.469 "name": null, 00:16:52.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.469 "is_configured": false, 00:16:52.469 "data_offset": 2048, 00:16:52.469 "data_size": 63488 00:16:52.469 } 00:16:52.469 ] 00:16:52.469 }' 00:16:52.469 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.469 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.728 [2024-12-09 22:58:08.474602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.728 [2024-12-09 22:58:08.474707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.728 [2024-12-09 22:58:08.474740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:52.728 [2024-12-09 22:58:08.474753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.728 [2024-12-09 22:58:08.475396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.728 [2024-12-09 22:58:08.475433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.728 [2024-12-09 22:58:08.475567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:52.728 [2024-12-09 22:58:08.475609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:16:52.728 [2024-12-09 22:58:08.475771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:52.728 [2024-12-09 22:58:08.475790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:52.728 [2024-12-09 22:58:08.476106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:52.728 [2024-12-09 22:58:08.476312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:52.728 [2024-12-09 22:58:08.476339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:52.728 [2024-12-09 22:58:08.476546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.728 pt3 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.728 "name": "raid_bdev1", 00:16:52.728 "uuid": "d883844c-6e9f-4568-af8e-f218790a259e", 00:16:52.728 "strip_size_kb": 0, 00:16:52.728 "state": "online", 00:16:52.728 "raid_level": "raid1", 00:16:52.728 "superblock": true, 00:16:52.728 "num_base_bdevs": 3, 00:16:52.728 "num_base_bdevs_discovered": 2, 00:16:52.728 "num_base_bdevs_operational": 2, 00:16:52.728 "base_bdevs_list": [ 00:16:52.728 { 00:16:52.728 "name": null, 00:16:52.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.728 "is_configured": false, 00:16:52.728 "data_offset": 2048, 00:16:52.728 "data_size": 63488 00:16:52.728 }, 00:16:52.728 { 00:16:52.728 "name": "pt2", 00:16:52.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.728 "is_configured": true, 00:16:52.728 "data_offset": 2048, 00:16:52.728 "data_size": 63488 00:16:52.728 }, 00:16:52.728 { 00:16:52.728 "name": "pt3", 00:16:52.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.728 "is_configured": true, 00:16:52.728 "data_offset": 2048, 00:16:52.728 "data_size": 63488 00:16:52.728 } 00:16:52.728 ] 00:16:52.728 }' 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.728 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.295 22:58:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.295 22:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:53.295 [2024-12-09 22:58:08.994078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d883844c-6e9f-4568-af8e-f218790a259e '!=' d883844c-6e9f-4568-af8e-f218790a259e ']' 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69181 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69181 ']' 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69181 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.295 
22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69181 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.295 killing process with pid 69181 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69181' 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 69181 00:16:53.295 [2024-12-09 22:58:09.065336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.295 22:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69181 00:16:53.295 [2024-12-09 22:58:09.065535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.295 [2024-12-09 22:58:09.065624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.295 [2024-12-09 22:58:09.065648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:53.863 [2024-12-09 22:58:09.464617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.240 22:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:55.240 00:16:55.240 real 0m8.566s 00:16:55.240 user 0m13.126s 00:16:55.240 sys 0m1.567s 00:16:55.240 22:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.241 22:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.241 ************************************ 00:16:55.241 END TEST raid_superblock_test 00:16:55.241 ************************************ 00:16:55.241 22:58:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:16:55.241 22:58:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:55.241 22:58:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.241 22:58:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.241 ************************************ 00:16:55.241 START TEST raid_read_error_test 00:16:55.241 ************************************ 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.exmwCXs8MW 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69640 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69640 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69640 ']' 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.241 22:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:55.241 [2024-12-09 22:58:11.083379] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:16:55.241 [2024-12-09 22:58:11.083545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69640 ] 00:16:55.500 [2024-12-09 22:58:11.253405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.791 [2024-12-09 22:58:11.415559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.057 [2024-12-09 22:58:11.689508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.057 [2024-12-09 22:58:11.689570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.317 22:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.317 22:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:56.317 22:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:56.317 22:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:56.317 22:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 BaseBdev1_malloc 00:16:56.317 22:58:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 true 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 [2024-12-09 22:58:12.032847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:56.317 [2024-12-09 22:58:12.032923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.317 [2024-12-09 22:58:12.032948] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:56.317 [2024-12-09 22:58:12.032962] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.317 [2024-12-09 22:58:12.035693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.317 [2024-12-09 22:58:12.035733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:56.317 BaseBdev1 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 BaseBdev2_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 true 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 [2024-12-09 22:58:12.099526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:56.317 [2024-12-09 22:58:12.099596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.317 [2024-12-09 22:58:12.099629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:56.317 [2024-12-09 22:58:12.099641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.317 [2024-12-09 22:58:12.102285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.317 [2024-12-09 22:58:12.102330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:56.317 BaseBdev2 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 BaseBdev3_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 true 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 [2024-12-09 22:58:12.174764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:56.577 [2024-12-09 22:58:12.174842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.577 [2024-12-09 22:58:12.174867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:56.577 [2024-12-09 22:58:12.174881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.577 [2024-12-09 22:58:12.177738] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.577 [2024-12-09 22:58:12.177781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:56.577 BaseBdev3 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 [2024-12-09 22:58:12.182860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.577 [2024-12-09 22:58:12.185235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.577 [2024-12-09 22:58:12.185433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.577 [2024-12-09 22:58:12.185736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:56.577 [2024-12-09 22:58:12.185754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:56.577 [2024-12-09 22:58:12.186090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:56.577 [2024-12-09 22:58:12.186326] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:56.577 [2024-12-09 22:58:12.186340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:56.577 [2024-12-09 22:58:12.186541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.577 22:58:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.577 "name": "raid_bdev1", 00:16:56.577 "uuid": "e72fb67e-f9ba-4487-b09f-d1d9a911c91e", 00:16:56.577 "strip_size_kb": 0, 00:16:56.577 "state": "online", 00:16:56.577 "raid_level": "raid1", 00:16:56.577 "superblock": true, 00:16:56.577 "num_base_bdevs": 3, 
00:16:56.577 "num_base_bdevs_discovered": 3, 00:16:56.577 "num_base_bdevs_operational": 3, 00:16:56.577 "base_bdevs_list": [ 00:16:56.577 { 00:16:56.577 "name": "BaseBdev1", 00:16:56.577 "uuid": "dfe422bb-0e76-5265-b116-93958969349f", 00:16:56.577 "is_configured": true, 00:16:56.577 "data_offset": 2048, 00:16:56.577 "data_size": 63488 00:16:56.577 }, 00:16:56.577 { 00:16:56.577 "name": "BaseBdev2", 00:16:56.577 "uuid": "af746b50-61a5-5510-b94f-8410885607ad", 00:16:56.577 "is_configured": true, 00:16:56.577 "data_offset": 2048, 00:16:56.577 "data_size": 63488 00:16:56.577 }, 00:16:56.577 { 00:16:56.577 "name": "BaseBdev3", 00:16:56.577 "uuid": "a30f7996-2493-5d12-8e50-81c80aebf673", 00:16:56.577 "is_configured": true, 00:16:56.577 "data_offset": 2048, 00:16:56.577 "data_size": 63488 00:16:56.577 } 00:16:56.577 ] 00:16:56.577 }' 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.577 22:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.837 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:56.837 22:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:57.095 [2024-12-09 22:58:12.751454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:58.032 
22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.032 22:58:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.032 "name": "raid_bdev1", 00:16:58.032 "uuid": "e72fb67e-f9ba-4487-b09f-d1d9a911c91e", 00:16:58.032 "strip_size_kb": 0, 00:16:58.032 "state": "online", 00:16:58.032 "raid_level": "raid1", 00:16:58.032 "superblock": true, 00:16:58.032 "num_base_bdevs": 3, 00:16:58.032 "num_base_bdevs_discovered": 3, 00:16:58.032 "num_base_bdevs_operational": 3, 00:16:58.032 "base_bdevs_list": [ 00:16:58.032 { 00:16:58.032 "name": "BaseBdev1", 00:16:58.032 "uuid": "dfe422bb-0e76-5265-b116-93958969349f", 00:16:58.032 "is_configured": true, 00:16:58.032 "data_offset": 2048, 00:16:58.032 "data_size": 63488 00:16:58.032 }, 00:16:58.032 { 00:16:58.032 "name": "BaseBdev2", 00:16:58.032 "uuid": "af746b50-61a5-5510-b94f-8410885607ad", 00:16:58.032 "is_configured": true, 00:16:58.032 "data_offset": 2048, 00:16:58.032 "data_size": 63488 00:16:58.032 }, 00:16:58.032 { 00:16:58.032 "name": "BaseBdev3", 00:16:58.032 "uuid": "a30f7996-2493-5d12-8e50-81c80aebf673", 00:16:58.032 "is_configured": true, 00:16:58.032 "data_offset": 2048, 00:16:58.032 "data_size": 63488 00:16:58.032 } 00:16:58.032 ] 00:16:58.032 }' 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.032 22:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.292 22:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.292 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.292 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.292 [2024-12-09 22:58:14.043043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.292 [2024-12-09 22:58:14.043189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.292 [2024-12-09 22:58:14.046525] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.292 [2024-12-09 22:58:14.046647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.292 [2024-12-09 22:58:14.046809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.292 [2024-12-09 22:58:14.046863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:58.292 { 00:16:58.292 "results": [ 00:16:58.292 { 00:16:58.292 "job": "raid_bdev1", 00:16:58.292 "core_mask": "0x1", 00:16:58.292 "workload": "randrw", 00:16:58.292 "percentage": 50, 00:16:58.292 "status": "finished", 00:16:58.292 "queue_depth": 1, 00:16:58.292 "io_size": 131072, 00:16:58.292 "runtime": 1.292099, 00:16:58.292 "iops": 8802.731060081309, 00:16:58.292 "mibps": 1100.3413825101636, 00:16:58.292 "io_failed": 0, 00:16:58.292 "io_timeout": 0, 00:16:58.292 "avg_latency_us": 110.38449662641297, 00:16:58.292 "min_latency_us": 26.941484716157206, 00:16:58.292 "max_latency_us": 1824.419213973799 00:16:58.292 } 00:16:58.292 ], 00:16:58.292 "core_count": 1 00:16:58.292 } 00:16:58.292 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.292 22:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69640 00:16:58.292 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69640 ']' 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69640 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69640 00:16:58.293 killing process with pid 69640 00:16:58.293 22:58:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69640' 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69640 00:16:58.293 22:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69640 00:16:58.293 [2024-12-09 22:58:14.081946] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.551 [2024-12-09 22:58:14.367666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.exmwCXs8MW 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:00.453 ************************************ 00:17:00.453 END TEST raid_read_error_test 00:17:00.453 ************************************ 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:00.453 00:17:00.453 real 0m4.924s 00:17:00.453 user 0m5.688s 00:17:00.453 sys 0m0.686s 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.453 22:58:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:00.453 22:58:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:17:00.453 22:58:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:00.453 22:58:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.453 22:58:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.453 ************************************ 00:17:00.453 START TEST raid_write_error_test 00:17:00.453 ************************************ 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:00.453 
22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zEe8Dqve6d 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69787 00:17:00.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69787 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69787 ']' 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.453 22:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.453 [2024-12-09 22:58:16.066300] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:17:00.453 [2024-12-09 22:58:16.066574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69787 ] 00:17:00.453 [2024-12-09 22:58:16.251786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.712 [2024-12-09 22:58:16.409565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.971 [2024-12-09 22:58:16.667444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.971 [2024-12-09 22:58:16.667651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.230 22:58:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.230 22:58:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:01.230 22:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 
-- # for bdev in "${base_bdevs[@]}" 00:17:01.230 22:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.230 22:58:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.230 22:58:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.230 BaseBdev1_malloc 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.230 true 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.230 [2024-12-09 22:58:17.055222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:01.230 [2024-12-09 22:58:17.055292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.230 [2024-12-09 22:58:17.055317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:01.230 [2024-12-09 22:58:17.055332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.230 [2024-12-09 22:58:17.058236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.230 [2024-12-09 22:58:17.058334] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.230 BaseBdev1 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.230 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 BaseBdev2_malloc 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 true 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 [2024-12-09 22:58:17.133950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:01.489 [2024-12-09 22:58:17.134016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.489 [2024-12-09 22:58:17.134033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:01.489 
[2024-12-09 22:58:17.134045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.489 [2024-12-09 22:58:17.136727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.489 [2024-12-09 22:58:17.136842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.489 BaseBdev2 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 BaseBdev3_malloc 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 true 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 [2024-12-09 22:58:17.225212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:17:01.489 [2024-12-09 22:58:17.225285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.489 [2024-12-09 22:58:17.225310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:01.489 [2024-12-09 22:58:17.225325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.489 [2024-12-09 22:58:17.228290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.489 [2024-12-09 22:58:17.228403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:01.489 BaseBdev3 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.489 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.489 [2024-12-09 22:58:17.237323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.489 [2024-12-09 22:58:17.239839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.489 [2024-12-09 22:58:17.239932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.490 [2024-12-09 22:58:17.240195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.490 [2024-12-09 22:58:17.240210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:01.490 [2024-12-09 22:58:17.240568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:01.490 [2024-12-09 22:58:17.240792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:17:01.490 [2024-12-09 22:58:17.240807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:01.490 [2024-12-09 22:58:17.241017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.490 22:58:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.490 "name": "raid_bdev1", 00:17:01.490 "uuid": "db159af0-0cbf-4141-b81c-d33dceac4da2", 00:17:01.490 "strip_size_kb": 0, 00:17:01.490 "state": "online", 00:17:01.490 "raid_level": "raid1", 00:17:01.490 "superblock": true, 00:17:01.490 "num_base_bdevs": 3, 00:17:01.490 "num_base_bdevs_discovered": 3, 00:17:01.490 "num_base_bdevs_operational": 3, 00:17:01.490 "base_bdevs_list": [ 00:17:01.490 { 00:17:01.490 "name": "BaseBdev1", 00:17:01.490 "uuid": "90dd6971-53c6-57e9-bfb7-11be6f9cb937", 00:17:01.490 "is_configured": true, 00:17:01.490 "data_offset": 2048, 00:17:01.490 "data_size": 63488 00:17:01.490 }, 00:17:01.490 { 00:17:01.490 "name": "BaseBdev2", 00:17:01.490 "uuid": "4650a951-7049-5c7c-9efe-85b750fc12f6", 00:17:01.490 "is_configured": true, 00:17:01.490 "data_offset": 2048, 00:17:01.490 "data_size": 63488 00:17:01.490 }, 00:17:01.490 { 00:17:01.490 "name": "BaseBdev3", 00:17:01.490 "uuid": "de4a1ba3-d5de-5a81-ab04-f9ba3f701706", 00:17:01.490 "is_configured": true, 00:17:01.490 "data_offset": 2048, 00:17:01.490 "data_size": 63488 00:17:01.490 } 00:17:01.490 ] 00:17:01.490 }' 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.490 22:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.056 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:02.056 22:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:02.056 [2024-12-09 22:58:17.778029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc 
write failure 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.991 [2024-12-09 22:58:18.672608] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:02.991 [2024-12-09 22:58:18.672787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.991 [2024-12-09 22:58:18.673126] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.991 22:58:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.991 "name": "raid_bdev1", 00:17:02.991 "uuid": "db159af0-0cbf-4141-b81c-d33dceac4da2", 00:17:02.991 "strip_size_kb": 0, 00:17:02.991 "state": "online", 00:17:02.991 "raid_level": "raid1", 00:17:02.991 "superblock": true, 00:17:02.991 "num_base_bdevs": 3, 00:17:02.991 "num_base_bdevs_discovered": 2, 00:17:02.991 "num_base_bdevs_operational": 2, 00:17:02.991 "base_bdevs_list": [ 00:17:02.991 { 00:17:02.991 "name": null, 00:17:02.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.991 "is_configured": false, 00:17:02.991 "data_offset": 0, 00:17:02.991 "data_size": 63488 00:17:02.991 }, 00:17:02.991 { 00:17:02.991 "name": "BaseBdev2", 00:17:02.991 "uuid": "4650a951-7049-5c7c-9efe-85b750fc12f6", 00:17:02.991 "is_configured": true, 00:17:02.991 "data_offset": 2048, 00:17:02.991 "data_size": 63488 00:17:02.991 }, 00:17:02.991 { 00:17:02.991 "name": "BaseBdev3", 00:17:02.991 "uuid": "de4a1ba3-d5de-5a81-ab04-f9ba3f701706", 00:17:02.991 "is_configured": true, 00:17:02.991 "data_offset": 2048, 00:17:02.991 "data_size": 63488 
00:17:02.991 } 00:17:02.991 ] 00:17:02.991 }' 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.991 22:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.556 22:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:03.556 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.556 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.556 [2024-12-09 22:58:19.116901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.556 [2024-12-09 22:58:19.117050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.556 [2024-12-09 22:58:19.120342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.556 [2024-12-09 22:58:19.120501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.556 [2024-12-09 22:58:19.120642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.556 [2024-12-09 22:58:19.120707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:03.556 { 00:17:03.556 "results": [ 00:17:03.556 { 00:17:03.556 "job": "raid_bdev1", 00:17:03.556 "core_mask": "0x1", 00:17:03.556 "workload": "randrw", 00:17:03.556 "percentage": 50, 00:17:03.556 "status": "finished", 00:17:03.556 "queue_depth": 1, 00:17:03.556 "io_size": 131072, 00:17:03.556 "runtime": 1.339269, 00:17:03.556 "iops": 9613.453309230632, 00:17:03.556 "mibps": 1201.681663653829, 00:17:03.556 "io_failed": 0, 00:17:03.556 "io_timeout": 0, 00:17:03.556 "avg_latency_us": 100.80300368847246, 00:17:03.556 "min_latency_us": 26.1589519650655, 00:17:03.556 "max_latency_us": 1674.172925764192 00:17:03.556 } 00:17:03.556 ], 
00:17:03.556 "core_count": 1 00:17:03.556 } 00:17:03.556 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.556 22:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69787 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69787 ']' 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69787 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69787 00:17:03.557 killing process with pid 69787 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69787' 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69787 00:17:03.557 [2024-12-09 22:58:19.164953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.557 22:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69787 00:17:03.836 [2024-12-09 22:58:19.462115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zEe8Dqve6d 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:05.212 ************************************ 00:17:05.212 END TEST raid_write_error_test 00:17:05.212 ************************************ 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:05.212 00:17:05.212 real 0m5.024s 00:17:05.212 user 0m5.801s 00:17:05.212 sys 0m0.685s 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.212 22:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.212 22:58:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:17:05.212 22:58:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:05.212 22:58:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:05.212 22:58:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:05.212 22:58:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.212 22:58:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.212 ************************************ 00:17:05.212 START TEST raid_state_function_test 00:17:05.212 ************************************ 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:05.212 22:58:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69936 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69936' 00:17:05.212 Process raid pid: 69936 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69936 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69936 ']' 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.212 22:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.471 [2024-12-09 22:58:21.153260] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:17:05.471 [2024-12-09 22:58:21.154004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.729 [2024-12-09 22:58:21.342232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.729 [2024-12-09 22:58:21.509062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.987 [2024-12-09 22:58:21.794317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.987 [2024-12-09 22:58:21.794472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.244 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.244 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:06.244 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:06.244 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.244 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:06.244 [2024-12-09 22:58:22.060754] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.244 [2024-12-09 22:58:22.060938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.244 [2024-12-09 22:58:22.060977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.245 [2024-12-09 22:58:22.061008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.245 [2024-12-09 22:58:22.061030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.245 [2024-12-09 22:58:22.061057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.245 [2024-12-09 22:58:22.061079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.245 [2024-12-09 22:58:22.061122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.245 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.502 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.502 "name": "Existed_Raid", 00:17:06.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.502 "strip_size_kb": 64, 00:17:06.502 "state": "configuring", 00:17:06.502 "raid_level": "raid0", 00:17:06.502 "superblock": false, 00:17:06.502 "num_base_bdevs": 4, 00:17:06.502 "num_base_bdevs_discovered": 0, 00:17:06.502 "num_base_bdevs_operational": 4, 00:17:06.502 "base_bdevs_list": [ 00:17:06.502 { 00:17:06.502 "name": "BaseBdev1", 00:17:06.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.502 "is_configured": false, 00:17:06.502 "data_offset": 0, 00:17:06.502 "data_size": 0 00:17:06.502 }, 00:17:06.502 { 00:17:06.502 "name": "BaseBdev2", 00:17:06.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.502 "is_configured": false, 00:17:06.502 "data_offset": 0, 00:17:06.502 "data_size": 0 00:17:06.502 }, 00:17:06.502 { 00:17:06.502 "name": "BaseBdev3", 00:17:06.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.502 "is_configured": false, 
00:17:06.502 "data_offset": 0, 00:17:06.502 "data_size": 0 00:17:06.502 }, 00:17:06.502 { 00:17:06.502 "name": "BaseBdev4", 00:17:06.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.502 "is_configured": false, 00:17:06.502 "data_offset": 0, 00:17:06.502 "data_size": 0 00:17:06.502 } 00:17:06.502 ] 00:17:06.502 }' 00:17:06.502 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.502 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.759 [2024-12-09 22:58:22.500514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.759 [2024-12-09 22:58:22.500584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.759 [2024-12-09 22:58:22.512494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.759 [2024-12-09 22:58:22.512562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.759 [2024-12-09 22:58:22.512573] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:17:06.759 [2024-12-09 22:58:22.512586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.759 [2024-12-09 22:58:22.512594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.759 [2024-12-09 22:58:22.512605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.759 [2024-12-09 22:58:22.512612] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.759 [2024-12-09 22:58:22.512624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.759 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.759 [2024-12-09 22:58:22.578847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.760 BaseBdev1 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.760 22:58:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.760 [ 00:17:06.760 { 00:17:06.760 "name": "BaseBdev1", 00:17:06.760 "aliases": [ 00:17:06.760 "9738479a-358c-4bfb-90af-b72a418f45c1" 00:17:06.760 ], 00:17:06.760 "product_name": "Malloc disk", 00:17:06.760 "block_size": 512, 00:17:06.760 "num_blocks": 65536, 00:17:06.760 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:06.760 "assigned_rate_limits": { 00:17:06.760 "rw_ios_per_sec": 0, 00:17:06.760 "rw_mbytes_per_sec": 0, 00:17:06.760 "r_mbytes_per_sec": 0, 00:17:06.760 "w_mbytes_per_sec": 0 00:17:06.760 }, 00:17:06.760 "claimed": true, 00:17:06.760 "claim_type": "exclusive_write", 00:17:06.760 "zoned": false, 00:17:06.760 "supported_io_types": { 00:17:06.760 "read": true, 00:17:06.760 "write": true, 00:17:06.760 "unmap": true, 00:17:06.760 "flush": true, 00:17:06.760 "reset": true, 00:17:06.760 "nvme_admin": false, 00:17:06.760 "nvme_io": false, 00:17:06.760 "nvme_io_md": false, 00:17:06.760 "write_zeroes": true, 00:17:06.760 "zcopy": true, 00:17:06.760 "get_zone_info": false, 00:17:06.760 "zone_management": false, 00:17:06.760 "zone_append": false, 00:17:06.760 "compare": false, 
00:17:06.760 "compare_and_write": false, 00:17:06.760 "abort": true, 00:17:06.760 "seek_hole": false, 00:17:06.760 "seek_data": false, 00:17:06.760 "copy": true, 00:17:06.760 "nvme_iov_md": false 00:17:06.760 }, 00:17:06.760 "memory_domains": [ 00:17:06.760 { 00:17:06.760 "dma_device_id": "system", 00:17:06.760 "dma_device_type": 1 00:17:06.760 }, 00:17:06.760 { 00:17:06.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.760 "dma_device_type": 2 00:17:06.760 } 00:17:06.760 ], 00:17:06.760 "driver_specific": {} 00:17:06.760 } 00:17:06.760 ] 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.760 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.019 "name": "Existed_Raid", 00:17:07.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.019 "strip_size_kb": 64, 00:17:07.019 "state": "configuring", 00:17:07.019 "raid_level": "raid0", 00:17:07.019 "superblock": false, 00:17:07.019 "num_base_bdevs": 4, 00:17:07.019 "num_base_bdevs_discovered": 1, 00:17:07.019 "num_base_bdevs_operational": 4, 00:17:07.019 "base_bdevs_list": [ 00:17:07.019 { 00:17:07.019 "name": "BaseBdev1", 00:17:07.019 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:07.019 "is_configured": true, 00:17:07.019 "data_offset": 0, 00:17:07.019 "data_size": 65536 00:17:07.019 }, 00:17:07.019 { 00:17:07.019 "name": "BaseBdev2", 00:17:07.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.019 "is_configured": false, 00:17:07.019 "data_offset": 0, 00:17:07.019 "data_size": 0 00:17:07.019 }, 00:17:07.019 { 00:17:07.019 "name": "BaseBdev3", 00:17:07.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.019 "is_configured": false, 00:17:07.019 "data_offset": 0, 00:17:07.019 "data_size": 0 00:17:07.019 }, 00:17:07.019 { 00:17:07.019 "name": "BaseBdev4", 00:17:07.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.019 "is_configured": false, 00:17:07.019 "data_offset": 0, 00:17:07.019 "data_size": 0 00:17:07.019 } 00:17:07.019 ] 00:17:07.019 }' 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.019 22:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 [2024-12-09 22:58:23.050352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.278 [2024-12-09 22:58:23.050506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 [2024-12-09 22:58:23.062414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.278 [2024-12-09 22:58:23.064677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.278 [2024-12-09 22:58:23.064774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.278 [2024-12-09 22:58:23.064812] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.278 [2024-12-09 22:58:23.064850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.278 [2024-12-09 22:58:23.064889] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:07.278 [2024-12-09 22:58:23.064916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.278 "name": "Existed_Raid", 00:17:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.278 "strip_size_kb": 64, 00:17:07.278 "state": "configuring", 00:17:07.278 "raid_level": "raid0", 00:17:07.278 "superblock": false, 00:17:07.278 "num_base_bdevs": 4, 00:17:07.278 "num_base_bdevs_discovered": 1, 00:17:07.278 "num_base_bdevs_operational": 4, 00:17:07.278 "base_bdevs_list": [ 00:17:07.278 { 00:17:07.278 "name": "BaseBdev1", 00:17:07.278 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:07.278 "is_configured": true, 00:17:07.278 "data_offset": 0, 00:17:07.278 "data_size": 65536 00:17:07.278 }, 00:17:07.278 { 00:17:07.278 "name": "BaseBdev2", 00:17:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.278 "is_configured": false, 00:17:07.278 "data_offset": 0, 00:17:07.278 "data_size": 0 00:17:07.278 }, 00:17:07.278 { 00:17:07.278 "name": "BaseBdev3", 00:17:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.278 "is_configured": false, 00:17:07.278 "data_offset": 0, 00:17:07.278 "data_size": 0 00:17:07.278 }, 00:17:07.278 { 00:17:07.278 "name": "BaseBdev4", 00:17:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.278 "is_configured": false, 00:17:07.278 "data_offset": 0, 00:17:07.278 "data_size": 0 00:17:07.278 } 00:17:07.278 ] 00:17:07.278 }' 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.278 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.847 [2024-12-09 22:58:23.549847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.847 BaseBdev2 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.847 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.847 [ 00:17:07.847 { 00:17:07.847 "name": 
"BaseBdev2", 00:17:07.847 "aliases": [ 00:17:07.847 "7f854a68-f849-429d-ad96-2b9f5b46ddb8" 00:17:07.847 ], 00:17:07.847 "product_name": "Malloc disk", 00:17:07.847 "block_size": 512, 00:17:07.847 "num_blocks": 65536, 00:17:07.847 "uuid": "7f854a68-f849-429d-ad96-2b9f5b46ddb8", 00:17:07.847 "assigned_rate_limits": { 00:17:07.847 "rw_ios_per_sec": 0, 00:17:07.847 "rw_mbytes_per_sec": 0, 00:17:07.847 "r_mbytes_per_sec": 0, 00:17:07.847 "w_mbytes_per_sec": 0 00:17:07.847 }, 00:17:07.847 "claimed": true, 00:17:07.847 "claim_type": "exclusive_write", 00:17:07.847 "zoned": false, 00:17:07.847 "supported_io_types": { 00:17:07.847 "read": true, 00:17:07.847 "write": true, 00:17:07.847 "unmap": true, 00:17:07.847 "flush": true, 00:17:07.847 "reset": true, 00:17:07.847 "nvme_admin": false, 00:17:07.847 "nvme_io": false, 00:17:07.847 "nvme_io_md": false, 00:17:07.847 "write_zeroes": true, 00:17:07.847 "zcopy": true, 00:17:07.847 "get_zone_info": false, 00:17:07.847 "zone_management": false, 00:17:07.847 "zone_append": false, 00:17:07.847 "compare": false, 00:17:07.847 "compare_and_write": false, 00:17:07.847 "abort": true, 00:17:07.847 "seek_hole": false, 00:17:07.847 "seek_data": false, 00:17:07.847 "copy": true, 00:17:07.847 "nvme_iov_md": false 00:17:07.847 }, 00:17:07.847 "memory_domains": [ 00:17:07.847 { 00:17:07.847 "dma_device_id": "system", 00:17:07.847 "dma_device_type": 1 00:17:07.847 }, 00:17:07.847 { 00:17:07.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.847 "dma_device_type": 2 00:17:07.847 } 00:17:07.847 ], 00:17:07.847 "driver_specific": {} 00:17:07.847 } 00:17:07.847 ] 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.848 "name": "Existed_Raid", 00:17:07.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.848 
"strip_size_kb": 64, 00:17:07.848 "state": "configuring", 00:17:07.848 "raid_level": "raid0", 00:17:07.848 "superblock": false, 00:17:07.848 "num_base_bdevs": 4, 00:17:07.848 "num_base_bdevs_discovered": 2, 00:17:07.848 "num_base_bdevs_operational": 4, 00:17:07.848 "base_bdevs_list": [ 00:17:07.848 { 00:17:07.848 "name": "BaseBdev1", 00:17:07.848 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:07.848 "is_configured": true, 00:17:07.848 "data_offset": 0, 00:17:07.848 "data_size": 65536 00:17:07.848 }, 00:17:07.848 { 00:17:07.848 "name": "BaseBdev2", 00:17:07.848 "uuid": "7f854a68-f849-429d-ad96-2b9f5b46ddb8", 00:17:07.848 "is_configured": true, 00:17:07.848 "data_offset": 0, 00:17:07.848 "data_size": 65536 00:17:07.848 }, 00:17:07.848 { 00:17:07.848 "name": "BaseBdev3", 00:17:07.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.848 "is_configured": false, 00:17:07.848 "data_offset": 0, 00:17:07.848 "data_size": 0 00:17:07.848 }, 00:17:07.848 { 00:17:07.848 "name": "BaseBdev4", 00:17:07.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.848 "is_configured": false, 00:17:07.848 "data_offset": 0, 00:17:07.848 "data_size": 0 00:17:07.848 } 00:17:07.848 ] 00:17:07.848 }' 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.848 22:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.450 [2024-12-09 22:58:24.114809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.450 BaseBdev3 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.450 [ 00:17:08.450 { 00:17:08.450 "name": "BaseBdev3", 00:17:08.450 "aliases": [ 00:17:08.450 "cdd970bc-9e7b-47a6-bab4-cfae24919733" 00:17:08.450 ], 00:17:08.450 "product_name": "Malloc disk", 00:17:08.450 "block_size": 512, 00:17:08.450 "num_blocks": 65536, 00:17:08.450 "uuid": "cdd970bc-9e7b-47a6-bab4-cfae24919733", 00:17:08.450 "assigned_rate_limits": { 00:17:08.450 "rw_ios_per_sec": 0, 00:17:08.450 "rw_mbytes_per_sec": 0, 00:17:08.450 "r_mbytes_per_sec": 0, 00:17:08.450 "w_mbytes_per_sec": 0 00:17:08.450 
}, 00:17:08.450 "claimed": true, 00:17:08.450 "claim_type": "exclusive_write", 00:17:08.450 "zoned": false, 00:17:08.450 "supported_io_types": { 00:17:08.450 "read": true, 00:17:08.450 "write": true, 00:17:08.450 "unmap": true, 00:17:08.450 "flush": true, 00:17:08.450 "reset": true, 00:17:08.450 "nvme_admin": false, 00:17:08.450 "nvme_io": false, 00:17:08.450 "nvme_io_md": false, 00:17:08.450 "write_zeroes": true, 00:17:08.450 "zcopy": true, 00:17:08.450 "get_zone_info": false, 00:17:08.450 "zone_management": false, 00:17:08.450 "zone_append": false, 00:17:08.450 "compare": false, 00:17:08.450 "compare_and_write": false, 00:17:08.450 "abort": true, 00:17:08.450 "seek_hole": false, 00:17:08.450 "seek_data": false, 00:17:08.450 "copy": true, 00:17:08.450 "nvme_iov_md": false 00:17:08.450 }, 00:17:08.450 "memory_domains": [ 00:17:08.450 { 00:17:08.450 "dma_device_id": "system", 00:17:08.450 "dma_device_type": 1 00:17:08.450 }, 00:17:08.450 { 00:17:08.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.450 "dma_device_type": 2 00:17:08.450 } 00:17:08.450 ], 00:17:08.450 "driver_specific": {} 00:17:08.450 } 00:17:08.450 ] 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.450 22:58:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.450 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.450 "name": "Existed_Raid", 00:17:08.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.450 "strip_size_kb": 64, 00:17:08.450 "state": "configuring", 00:17:08.450 "raid_level": "raid0", 00:17:08.450 "superblock": false, 00:17:08.450 "num_base_bdevs": 4, 00:17:08.450 "num_base_bdevs_discovered": 3, 00:17:08.450 "num_base_bdevs_operational": 4, 00:17:08.450 "base_bdevs_list": [ 00:17:08.450 { 00:17:08.450 "name": "BaseBdev1", 00:17:08.450 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:08.450 "is_configured": true, 00:17:08.450 "data_offset": 0, 
00:17:08.450 "data_size": 65536 00:17:08.450 }, 00:17:08.450 { 00:17:08.450 "name": "BaseBdev2", 00:17:08.450 "uuid": "7f854a68-f849-429d-ad96-2b9f5b46ddb8", 00:17:08.450 "is_configured": true, 00:17:08.450 "data_offset": 0, 00:17:08.450 "data_size": 65536 00:17:08.450 }, 00:17:08.450 { 00:17:08.450 "name": "BaseBdev3", 00:17:08.450 "uuid": "cdd970bc-9e7b-47a6-bab4-cfae24919733", 00:17:08.451 "is_configured": true, 00:17:08.451 "data_offset": 0, 00:17:08.451 "data_size": 65536 00:17:08.451 }, 00:17:08.451 { 00:17:08.451 "name": "BaseBdev4", 00:17:08.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.451 "is_configured": false, 00:17:08.451 "data_offset": 0, 00:17:08.451 "data_size": 0 00:17:08.451 } 00:17:08.451 ] 00:17:08.451 }' 00:17:08.451 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.451 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.024 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:09.024 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.024 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.025 [2024-12-09 22:58:24.706400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.025 BaseBdev4 00:17:09.025 [2024-12-09 22:58:24.706602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:09.025 [2024-12-09 22:58:24.706623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:09.025 [2024-12-09 22:58:24.706956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:09.025 [2024-12-09 22:58:24.707144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:09.025 [2024-12-09 22:58:24.707158] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:09.025 [2024-12-09 22:58:24.707495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.025 [ 00:17:09.025 { 00:17:09.025 "name": "BaseBdev4", 00:17:09.025 "aliases": [ 00:17:09.025 "edbb47b8-abf9-49ca-aac9-0f4ffb52653c" 00:17:09.025 ], 00:17:09.025 "product_name": "Malloc 
disk", 00:17:09.025 "block_size": 512, 00:17:09.025 "num_blocks": 65536, 00:17:09.025 "uuid": "edbb47b8-abf9-49ca-aac9-0f4ffb52653c", 00:17:09.025 "assigned_rate_limits": { 00:17:09.025 "rw_ios_per_sec": 0, 00:17:09.025 "rw_mbytes_per_sec": 0, 00:17:09.025 "r_mbytes_per_sec": 0, 00:17:09.025 "w_mbytes_per_sec": 0 00:17:09.025 }, 00:17:09.025 "claimed": true, 00:17:09.025 "claim_type": "exclusive_write", 00:17:09.025 "zoned": false, 00:17:09.025 "supported_io_types": { 00:17:09.025 "read": true, 00:17:09.025 "write": true, 00:17:09.025 "unmap": true, 00:17:09.025 "flush": true, 00:17:09.025 "reset": true, 00:17:09.025 "nvme_admin": false, 00:17:09.025 "nvme_io": false, 00:17:09.025 "nvme_io_md": false, 00:17:09.025 "write_zeroes": true, 00:17:09.025 "zcopy": true, 00:17:09.025 "get_zone_info": false, 00:17:09.025 "zone_management": false, 00:17:09.025 "zone_append": false, 00:17:09.025 "compare": false, 00:17:09.025 "compare_and_write": false, 00:17:09.025 "abort": true, 00:17:09.025 "seek_hole": false, 00:17:09.025 "seek_data": false, 00:17:09.025 "copy": true, 00:17:09.025 "nvme_iov_md": false 00:17:09.025 }, 00:17:09.025 "memory_domains": [ 00:17:09.025 { 00:17:09.025 "dma_device_id": "system", 00:17:09.025 "dma_device_type": 1 00:17:09.025 }, 00:17:09.025 { 00:17:09.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.025 "dma_device_type": 2 00:17:09.025 } 00:17:09.025 ], 00:17:09.025 "driver_specific": {} 00:17:09.025 } 00:17:09.025 ] 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.025 "name": "Existed_Raid", 00:17:09.025 "uuid": "03f5f78a-ae1f-413c-90b0-02ed587c50ff", 00:17:09.025 "strip_size_kb": 64, 00:17:09.025 "state": "online", 00:17:09.025 "raid_level": "raid0", 00:17:09.025 "superblock": false, 00:17:09.025 "num_base_bdevs": 4, 00:17:09.025 
"num_base_bdevs_discovered": 4, 00:17:09.025 "num_base_bdevs_operational": 4, 00:17:09.025 "base_bdevs_list": [ 00:17:09.025 { 00:17:09.025 "name": "BaseBdev1", 00:17:09.025 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:09.025 "is_configured": true, 00:17:09.025 "data_offset": 0, 00:17:09.025 "data_size": 65536 00:17:09.025 }, 00:17:09.025 { 00:17:09.025 "name": "BaseBdev2", 00:17:09.025 "uuid": "7f854a68-f849-429d-ad96-2b9f5b46ddb8", 00:17:09.025 "is_configured": true, 00:17:09.025 "data_offset": 0, 00:17:09.025 "data_size": 65536 00:17:09.025 }, 00:17:09.025 { 00:17:09.025 "name": "BaseBdev3", 00:17:09.025 "uuid": "cdd970bc-9e7b-47a6-bab4-cfae24919733", 00:17:09.025 "is_configured": true, 00:17:09.025 "data_offset": 0, 00:17:09.025 "data_size": 65536 00:17:09.025 }, 00:17:09.025 { 00:17:09.025 "name": "BaseBdev4", 00:17:09.025 "uuid": "edbb47b8-abf9-49ca-aac9-0f4ffb52653c", 00:17:09.025 "is_configured": true, 00:17:09.025 "data_offset": 0, 00:17:09.025 "data_size": 65536 00:17:09.025 } 00:17:09.025 ] 00:17:09.025 }' 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.025 22:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.594 [2024-12-09 22:58:25.186094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.594 "name": "Existed_Raid", 00:17:09.594 "aliases": [ 00:17:09.594 "03f5f78a-ae1f-413c-90b0-02ed587c50ff" 00:17:09.594 ], 00:17:09.594 "product_name": "Raid Volume", 00:17:09.594 "block_size": 512, 00:17:09.594 "num_blocks": 262144, 00:17:09.594 "uuid": "03f5f78a-ae1f-413c-90b0-02ed587c50ff", 00:17:09.594 "assigned_rate_limits": { 00:17:09.594 "rw_ios_per_sec": 0, 00:17:09.594 "rw_mbytes_per_sec": 0, 00:17:09.594 "r_mbytes_per_sec": 0, 00:17:09.594 "w_mbytes_per_sec": 0 00:17:09.594 }, 00:17:09.594 "claimed": false, 00:17:09.594 "zoned": false, 00:17:09.594 "supported_io_types": { 00:17:09.594 "read": true, 00:17:09.594 "write": true, 00:17:09.594 "unmap": true, 00:17:09.594 "flush": true, 00:17:09.594 "reset": true, 00:17:09.594 "nvme_admin": false, 00:17:09.594 "nvme_io": false, 00:17:09.594 "nvme_io_md": false, 00:17:09.594 "write_zeroes": true, 00:17:09.594 "zcopy": false, 00:17:09.594 "get_zone_info": false, 00:17:09.594 "zone_management": false, 00:17:09.594 "zone_append": false, 00:17:09.594 "compare": false, 00:17:09.594 "compare_and_write": false, 00:17:09.594 "abort": false, 00:17:09.594 "seek_hole": false, 00:17:09.594 "seek_data": false, 00:17:09.594 "copy": false, 00:17:09.594 "nvme_iov_md": false 00:17:09.594 }, 00:17:09.594 "memory_domains": [ 
00:17:09.594 { 00:17:09.594 "dma_device_id": "system", 00:17:09.594 "dma_device_type": 1 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.594 "dma_device_type": 2 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "system", 00:17:09.594 "dma_device_type": 1 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.594 "dma_device_type": 2 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "system", 00:17:09.594 "dma_device_type": 1 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.594 "dma_device_type": 2 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "system", 00:17:09.594 "dma_device_type": 1 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.594 "dma_device_type": 2 00:17:09.594 } 00:17:09.594 ], 00:17:09.594 "driver_specific": { 00:17:09.594 "raid": { 00:17:09.594 "uuid": "03f5f78a-ae1f-413c-90b0-02ed587c50ff", 00:17:09.594 "strip_size_kb": 64, 00:17:09.594 "state": "online", 00:17:09.594 "raid_level": "raid0", 00:17:09.594 "superblock": false, 00:17:09.594 "num_base_bdevs": 4, 00:17:09.594 "num_base_bdevs_discovered": 4, 00:17:09.594 "num_base_bdevs_operational": 4, 00:17:09.594 "base_bdevs_list": [ 00:17:09.594 { 00:17:09.594 "name": "BaseBdev1", 00:17:09.594 "uuid": "9738479a-358c-4bfb-90af-b72a418f45c1", 00:17:09.594 "is_configured": true, 00:17:09.594 "data_offset": 0, 00:17:09.594 "data_size": 65536 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "name": "BaseBdev2", 00:17:09.594 "uuid": "7f854a68-f849-429d-ad96-2b9f5b46ddb8", 00:17:09.594 "is_configured": true, 00:17:09.594 "data_offset": 0, 00:17:09.594 "data_size": 65536 00:17:09.594 }, 00:17:09.594 { 00:17:09.594 "name": "BaseBdev3", 00:17:09.594 "uuid": "cdd970bc-9e7b-47a6-bab4-cfae24919733", 00:17:09.594 "is_configured": true, 00:17:09.594 "data_offset": 0, 00:17:09.594 "data_size": 65536 00:17:09.594 
}, 00:17:09.594 { 00:17:09.594 "name": "BaseBdev4", 00:17:09.594 "uuid": "edbb47b8-abf9-49ca-aac9-0f4ffb52653c", 00:17:09.594 "is_configured": true, 00:17:09.594 "data_offset": 0, 00:17:09.594 "data_size": 65536 00:17:09.594 } 00:17:09.594 ] 00:17:09.594 } 00:17:09.594 } 00:17:09.594 }' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:09.594 BaseBdev2 00:17:09.594 BaseBdev3 00:17:09.594 BaseBdev4' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.594 22:58:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.594 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:09.853 
22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.853 [2024-12-09 22:58:25.509256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.853 [2024-12-09 22:58:25.509291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.853 [2024-12-09 22:58:25.509347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.853 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.854 "name": "Existed_Raid", 00:17:09.854 "uuid": "03f5f78a-ae1f-413c-90b0-02ed587c50ff", 00:17:09.854 "strip_size_kb": 64, 00:17:09.854 
"state": "offline", 00:17:09.854 "raid_level": "raid0", 00:17:09.854 "superblock": false, 00:17:09.854 "num_base_bdevs": 4, 00:17:09.854 "num_base_bdevs_discovered": 3, 00:17:09.854 "num_base_bdevs_operational": 3, 00:17:09.854 "base_bdevs_list": [ 00:17:09.854 { 00:17:09.854 "name": null, 00:17:09.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.854 "is_configured": false, 00:17:09.854 "data_offset": 0, 00:17:09.854 "data_size": 65536 00:17:09.854 }, 00:17:09.854 { 00:17:09.854 "name": "BaseBdev2", 00:17:09.854 "uuid": "7f854a68-f849-429d-ad96-2b9f5b46ddb8", 00:17:09.854 "is_configured": true, 00:17:09.854 "data_offset": 0, 00:17:09.854 "data_size": 65536 00:17:09.854 }, 00:17:09.854 { 00:17:09.854 "name": "BaseBdev3", 00:17:09.854 "uuid": "cdd970bc-9e7b-47a6-bab4-cfae24919733", 00:17:09.854 "is_configured": true, 00:17:09.854 "data_offset": 0, 00:17:09.854 "data_size": 65536 00:17:09.854 }, 00:17:09.854 { 00:17:09.854 "name": "BaseBdev4", 00:17:09.854 "uuid": "edbb47b8-abf9-49ca-aac9-0f4ffb52653c", 00:17:09.854 "is_configured": true, 00:17:09.854 "data_offset": 0, 00:17:09.854 "data_size": 65536 00:17:09.854 } 00:17:09.854 ] 00:17:09.854 }' 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.854 22:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.422 [2024-12-09 22:58:26.099398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.422 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.422 [2024-12-09 22:58:26.254851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.681 [2024-12-09 22:58:26.400951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:10.681 [2024-12-09 22:58:26.401053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.681 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.940 BaseBdev2 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:10.940 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.941 [ 00:17:10.941 { 00:17:10.941 "name": "BaseBdev2", 00:17:10.941 "aliases": [ 00:17:10.941 "89267232-bc55-480f-b5cf-01bc41384268" 00:17:10.941 ], 00:17:10.941 "product_name": "Malloc disk", 00:17:10.941 "block_size": 512, 00:17:10.941 "num_blocks": 65536, 00:17:10.941 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:10.941 "assigned_rate_limits": { 00:17:10.941 "rw_ios_per_sec": 0, 00:17:10.941 "rw_mbytes_per_sec": 0, 00:17:10.941 "r_mbytes_per_sec": 0, 00:17:10.941 "w_mbytes_per_sec": 0 00:17:10.941 }, 00:17:10.941 "claimed": false, 00:17:10.941 "zoned": false, 
00:17:10.941 "supported_io_types": { 00:17:10.941 "read": true, 00:17:10.941 "write": true, 00:17:10.941 "unmap": true, 00:17:10.941 "flush": true, 00:17:10.941 "reset": true, 00:17:10.941 "nvme_admin": false, 00:17:10.941 "nvme_io": false, 00:17:10.941 "nvme_io_md": false, 00:17:10.941 "write_zeroes": true, 00:17:10.941 "zcopy": true, 00:17:10.941 "get_zone_info": false, 00:17:10.941 "zone_management": false, 00:17:10.941 "zone_append": false, 00:17:10.941 "compare": false, 00:17:10.941 "compare_and_write": false, 00:17:10.941 "abort": true, 00:17:10.941 "seek_hole": false, 00:17:10.941 "seek_data": false, 00:17:10.941 "copy": true, 00:17:10.941 "nvme_iov_md": false 00:17:10.941 }, 00:17:10.941 "memory_domains": [ 00:17:10.941 { 00:17:10.941 "dma_device_id": "system", 00:17:10.941 "dma_device_type": 1 00:17:10.941 }, 00:17:10.941 { 00:17:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.941 "dma_device_type": 2 00:17:10.941 } 00:17:10.941 ], 00:17:10.941 "driver_specific": {} 00:17:10.941 } 00:17:10.941 ] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.941 BaseBdev3 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.941 [ 00:17:10.941 { 00:17:10.941 "name": "BaseBdev3", 00:17:10.941 "aliases": [ 00:17:10.941 "63129368-8e64-43e2-a189-885307f98755" 00:17:10.941 ], 00:17:10.941 "product_name": "Malloc disk", 00:17:10.941 "block_size": 512, 00:17:10.941 "num_blocks": 65536, 00:17:10.941 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:10.941 "assigned_rate_limits": { 00:17:10.941 "rw_ios_per_sec": 0, 00:17:10.941 "rw_mbytes_per_sec": 0, 00:17:10.941 "r_mbytes_per_sec": 0, 00:17:10.941 "w_mbytes_per_sec": 0 00:17:10.941 }, 00:17:10.941 "claimed": false, 00:17:10.941 "zoned": false, 
00:17:10.941 "supported_io_types": { 00:17:10.941 "read": true, 00:17:10.941 "write": true, 00:17:10.941 "unmap": true, 00:17:10.941 "flush": true, 00:17:10.941 "reset": true, 00:17:10.941 "nvme_admin": false, 00:17:10.941 "nvme_io": false, 00:17:10.941 "nvme_io_md": false, 00:17:10.941 "write_zeroes": true, 00:17:10.941 "zcopy": true, 00:17:10.941 "get_zone_info": false, 00:17:10.941 "zone_management": false, 00:17:10.941 "zone_append": false, 00:17:10.941 "compare": false, 00:17:10.941 "compare_and_write": false, 00:17:10.941 "abort": true, 00:17:10.941 "seek_hole": false, 00:17:10.941 "seek_data": false, 00:17:10.941 "copy": true, 00:17:10.941 "nvme_iov_md": false 00:17:10.941 }, 00:17:10.941 "memory_domains": [ 00:17:10.941 { 00:17:10.941 "dma_device_id": "system", 00:17:10.941 "dma_device_type": 1 00:17:10.941 }, 00:17:10.941 { 00:17:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.941 "dma_device_type": 2 00:17:10.941 } 00:17:10.941 ], 00:17:10.941 "driver_specific": {} 00:17:10.941 } 00:17:10.941 ] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.941 BaseBdev4 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.941 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.201 [ 00:17:11.201 { 00:17:11.201 "name": "BaseBdev4", 00:17:11.201 "aliases": [ 00:17:11.201 "8d4a4d55-f987-4a37-b913-a373b13fe704" 00:17:11.201 ], 00:17:11.201 "product_name": "Malloc disk", 00:17:11.201 "block_size": 512, 00:17:11.201 "num_blocks": 65536, 00:17:11.201 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:11.201 "assigned_rate_limits": { 00:17:11.201 "rw_ios_per_sec": 0, 00:17:11.201 "rw_mbytes_per_sec": 0, 00:17:11.201 "r_mbytes_per_sec": 0, 00:17:11.201 "w_mbytes_per_sec": 0 00:17:11.201 }, 00:17:11.201 "claimed": false, 00:17:11.201 "zoned": false, 
00:17:11.201 "supported_io_types": { 00:17:11.201 "read": true, 00:17:11.201 "write": true, 00:17:11.201 "unmap": true, 00:17:11.201 "flush": true, 00:17:11.201 "reset": true, 00:17:11.201 "nvme_admin": false, 00:17:11.201 "nvme_io": false, 00:17:11.201 "nvme_io_md": false, 00:17:11.201 "write_zeroes": true, 00:17:11.201 "zcopy": true, 00:17:11.201 "get_zone_info": false, 00:17:11.201 "zone_management": false, 00:17:11.201 "zone_append": false, 00:17:11.201 "compare": false, 00:17:11.201 "compare_and_write": false, 00:17:11.201 "abort": true, 00:17:11.201 "seek_hole": false, 00:17:11.201 "seek_data": false, 00:17:11.201 "copy": true, 00:17:11.201 "nvme_iov_md": false 00:17:11.201 }, 00:17:11.201 "memory_domains": [ 00:17:11.201 { 00:17:11.201 "dma_device_id": "system", 00:17:11.201 "dma_device_type": 1 00:17:11.201 }, 00:17:11.201 { 00:17:11.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.201 "dma_device_type": 2 00:17:11.201 } 00:17:11.201 ], 00:17:11.201 "driver_specific": {} 00:17:11.201 } 00:17:11.201 ] 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.201 [2024-12-09 22:58:26.829401] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:17:11.201 [2024-12-09 22:58:26.829548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.201 [2024-12-09 22:58:26.829645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.201 [2024-12-09 22:58:26.831918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.201 [2024-12-09 22:58:26.832041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.201 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.201 "name": "Existed_Raid", 00:17:11.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.201 "strip_size_kb": 64, 00:17:11.201 "state": "configuring", 00:17:11.201 "raid_level": "raid0", 00:17:11.201 "superblock": false, 00:17:11.201 "num_base_bdevs": 4, 00:17:11.201 "num_base_bdevs_discovered": 3, 00:17:11.201 "num_base_bdevs_operational": 4, 00:17:11.201 "base_bdevs_list": [ 00:17:11.201 { 00:17:11.201 "name": "BaseBdev1", 00:17:11.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.201 "is_configured": false, 00:17:11.201 "data_offset": 0, 00:17:11.201 "data_size": 0 00:17:11.201 }, 00:17:11.201 { 00:17:11.201 "name": "BaseBdev2", 00:17:11.201 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:11.201 "is_configured": true, 00:17:11.201 "data_offset": 0, 00:17:11.201 "data_size": 65536 00:17:11.201 }, 00:17:11.202 { 00:17:11.202 "name": "BaseBdev3", 00:17:11.202 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:11.202 "is_configured": true, 00:17:11.202 "data_offset": 0, 00:17:11.202 "data_size": 65536 00:17:11.202 }, 00:17:11.202 { 00:17:11.202 "name": "BaseBdev4", 00:17:11.202 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:11.202 "is_configured": true, 00:17:11.202 "data_offset": 0, 00:17:11.202 "data_size": 65536 00:17:11.202 } 00:17:11.202 ] 00:17:11.202 }' 00:17:11.202 22:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.202 22:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 
22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 [2024-12-09 22:58:27.304597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.461 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.720 22:58:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.720 "name": "Existed_Raid", 00:17:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.720 "strip_size_kb": 64, 00:17:11.720 "state": "configuring", 00:17:11.720 "raid_level": "raid0", 00:17:11.720 "superblock": false, 00:17:11.720 "num_base_bdevs": 4, 00:17:11.720 "num_base_bdevs_discovered": 2, 00:17:11.720 "num_base_bdevs_operational": 4, 00:17:11.720 "base_bdevs_list": [ 00:17:11.720 { 00:17:11.720 "name": "BaseBdev1", 00:17:11.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.720 "is_configured": false, 00:17:11.720 "data_offset": 0, 00:17:11.720 "data_size": 0 00:17:11.720 }, 00:17:11.720 { 00:17:11.720 "name": null, 00:17:11.720 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:11.720 "is_configured": false, 00:17:11.720 "data_offset": 0, 00:17:11.720 "data_size": 65536 00:17:11.720 }, 00:17:11.720 { 00:17:11.720 "name": "BaseBdev3", 00:17:11.720 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:11.720 "is_configured": true, 00:17:11.720 "data_offset": 0, 00:17:11.720 "data_size": 65536 00:17:11.720 }, 00:17:11.720 { 00:17:11.720 "name": "BaseBdev4", 00:17:11.720 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:11.720 "is_configured": true, 00:17:11.720 "data_offset": 0, 00:17:11.720 "data_size": 65536 00:17:11.720 } 00:17:11.720 ] 00:17:11.720 }' 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.720 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.978 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 [2024-12-09 22:58:27.851061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.237 BaseBdev1 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.237 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 [ 00:17:12.237 { 00:17:12.237 "name": "BaseBdev1", 00:17:12.237 "aliases": [ 00:17:12.237 "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676" 00:17:12.237 ], 00:17:12.237 "product_name": "Malloc disk", 00:17:12.237 "block_size": 512, 00:17:12.237 "num_blocks": 65536, 00:17:12.237 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:12.237 "assigned_rate_limits": { 00:17:12.237 "rw_ios_per_sec": 0, 00:17:12.237 "rw_mbytes_per_sec": 0, 00:17:12.237 "r_mbytes_per_sec": 0, 00:17:12.237 "w_mbytes_per_sec": 0 00:17:12.237 }, 00:17:12.237 "claimed": true, 00:17:12.237 "claim_type": "exclusive_write", 00:17:12.237 "zoned": false, 00:17:12.237 "supported_io_types": { 00:17:12.237 "read": true, 00:17:12.237 "write": true, 00:17:12.237 "unmap": true, 00:17:12.237 "flush": true, 00:17:12.237 "reset": true, 00:17:12.238 "nvme_admin": false, 00:17:12.238 "nvme_io": false, 00:17:12.238 "nvme_io_md": false, 00:17:12.238 "write_zeroes": true, 00:17:12.238 "zcopy": true, 00:17:12.238 "get_zone_info": false, 00:17:12.238 "zone_management": false, 00:17:12.238 "zone_append": false, 00:17:12.238 "compare": false, 00:17:12.238 "compare_and_write": false, 00:17:12.238 "abort": true, 00:17:12.238 "seek_hole": false, 00:17:12.238 "seek_data": false, 00:17:12.238 "copy": true, 00:17:12.238 "nvme_iov_md": false 
00:17:12.238 }, 00:17:12.238 "memory_domains": [ 00:17:12.238 { 00:17:12.238 "dma_device_id": "system", 00:17:12.238 "dma_device_type": 1 00:17:12.238 }, 00:17:12.238 { 00:17:12.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.238 "dma_device_type": 2 00:17:12.238 } 00:17:12.238 ], 00:17:12.238 "driver_specific": {} 00:17:12.238 } 00:17:12.238 ] 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.238 22:58:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.238 "name": "Existed_Raid", 00:17:12.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.238 "strip_size_kb": 64, 00:17:12.238 "state": "configuring", 00:17:12.238 "raid_level": "raid0", 00:17:12.238 "superblock": false, 00:17:12.238 "num_base_bdevs": 4, 00:17:12.238 "num_base_bdevs_discovered": 3, 00:17:12.238 "num_base_bdevs_operational": 4, 00:17:12.238 "base_bdevs_list": [ 00:17:12.238 { 00:17:12.238 "name": "BaseBdev1", 00:17:12.238 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:12.238 "is_configured": true, 00:17:12.238 "data_offset": 0, 00:17:12.238 "data_size": 65536 00:17:12.238 }, 00:17:12.238 { 00:17:12.238 "name": null, 00:17:12.238 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:12.238 "is_configured": false, 00:17:12.238 "data_offset": 0, 00:17:12.238 "data_size": 65536 00:17:12.238 }, 00:17:12.238 { 00:17:12.238 "name": "BaseBdev3", 00:17:12.238 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:12.238 "is_configured": true, 00:17:12.238 "data_offset": 0, 00:17:12.238 "data_size": 65536 00:17:12.238 }, 00:17:12.238 { 00:17:12.238 "name": "BaseBdev4", 00:17:12.238 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:12.238 "is_configured": true, 00:17:12.238 "data_offset": 0, 00:17:12.238 "data_size": 65536 00:17:12.238 } 00:17:12.238 ] 00:17:12.238 }' 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.238 22:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.547 [2024-12-09 22:58:28.326411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.547 "name": "Existed_Raid", 00:17:12.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.547 "strip_size_kb": 64, 00:17:12.547 "state": "configuring", 00:17:12.547 "raid_level": "raid0", 00:17:12.547 "superblock": false, 00:17:12.547 "num_base_bdevs": 4, 00:17:12.547 "num_base_bdevs_discovered": 2, 00:17:12.547 "num_base_bdevs_operational": 4, 00:17:12.547 "base_bdevs_list": [ 00:17:12.547 { 00:17:12.547 "name": "BaseBdev1", 00:17:12.547 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:12.547 "is_configured": true, 00:17:12.547 "data_offset": 0, 00:17:12.547 "data_size": 65536 00:17:12.547 }, 00:17:12.547 { 00:17:12.547 "name": null, 00:17:12.547 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:12.547 "is_configured": false, 00:17:12.547 "data_offset": 0, 00:17:12.547 "data_size": 65536 00:17:12.547 }, 00:17:12.547 { 00:17:12.547 "name": null, 00:17:12.547 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:12.547 "is_configured": false, 
00:17:12.547 "data_offset": 0, 00:17:12.547 "data_size": 65536 00:17:12.547 }, 00:17:12.547 { 00:17:12.547 "name": "BaseBdev4", 00:17:12.547 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:12.547 "is_configured": true, 00:17:12.547 "data_offset": 0, 00:17:12.547 "data_size": 65536 00:17:12.547 } 00:17:12.547 ] 00:17:12.547 }' 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.547 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 [2024-12-09 22:58:28.825566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:13.129 22:58:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.129 "name": "Existed_Raid", 00:17:13.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.129 "strip_size_kb": 64, 00:17:13.129 "state": "configuring", 00:17:13.129 "raid_level": "raid0", 00:17:13.129 "superblock": false, 00:17:13.129 "num_base_bdevs": 4, 00:17:13.129 "num_base_bdevs_discovered": 3, 00:17:13.129 
"num_base_bdevs_operational": 4, 00:17:13.129 "base_bdevs_list": [ 00:17:13.129 { 00:17:13.129 "name": "BaseBdev1", 00:17:13.129 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:13.129 "is_configured": true, 00:17:13.129 "data_offset": 0, 00:17:13.129 "data_size": 65536 00:17:13.129 }, 00:17:13.129 { 00:17:13.129 "name": null, 00:17:13.129 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:13.129 "is_configured": false, 00:17:13.129 "data_offset": 0, 00:17:13.129 "data_size": 65536 00:17:13.129 }, 00:17:13.129 { 00:17:13.129 "name": "BaseBdev3", 00:17:13.129 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:13.129 "is_configured": true, 00:17:13.129 "data_offset": 0, 00:17:13.129 "data_size": 65536 00:17:13.129 }, 00:17:13.129 { 00:17:13.129 "name": "BaseBdev4", 00:17:13.129 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:13.129 "is_configured": true, 00:17:13.129 "data_offset": 0, 00:17:13.129 "data_size": 65536 00:17:13.129 } 00:17:13.129 ] 00:17:13.129 }' 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.129 22:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.693 [2024-12-09 22:58:29.344747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.693 22:58:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.693 "name": "Existed_Raid", 00:17:13.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.693 "strip_size_kb": 64, 00:17:13.693 "state": "configuring", 00:17:13.693 "raid_level": "raid0", 00:17:13.693 "superblock": false, 00:17:13.693 "num_base_bdevs": 4, 00:17:13.693 "num_base_bdevs_discovered": 2, 00:17:13.693 "num_base_bdevs_operational": 4, 00:17:13.693 "base_bdevs_list": [ 00:17:13.693 { 00:17:13.693 "name": null, 00:17:13.693 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:13.693 "is_configured": false, 00:17:13.693 "data_offset": 0, 00:17:13.693 "data_size": 65536 00:17:13.693 }, 00:17:13.693 { 00:17:13.693 "name": null, 00:17:13.693 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:13.693 "is_configured": false, 00:17:13.693 "data_offset": 0, 00:17:13.693 "data_size": 65536 00:17:13.693 }, 00:17:13.693 { 00:17:13.693 "name": "BaseBdev3", 00:17:13.693 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:13.693 "is_configured": true, 00:17:13.693 "data_offset": 0, 00:17:13.693 "data_size": 65536 00:17:13.693 }, 00:17:13.693 { 00:17:13.693 "name": "BaseBdev4", 00:17:13.693 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:13.693 "is_configured": true, 00:17:13.693 "data_offset": 0, 00:17:13.693 "data_size": 65536 00:17:13.693 } 00:17:13.693 ] 00:17:13.693 }' 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.693 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.258 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.258 22:58:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.258 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.259 [2024-12-09 22:58:29.918999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.259 "name": "Existed_Raid", 00:17:14.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.259 "strip_size_kb": 64, 00:17:14.259 "state": "configuring", 00:17:14.259 "raid_level": "raid0", 00:17:14.259 "superblock": false, 00:17:14.259 "num_base_bdevs": 4, 00:17:14.259 "num_base_bdevs_discovered": 3, 00:17:14.259 "num_base_bdevs_operational": 4, 00:17:14.259 "base_bdevs_list": [ 00:17:14.259 { 00:17:14.259 "name": null, 00:17:14.259 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:14.259 "is_configured": false, 00:17:14.259 "data_offset": 0, 00:17:14.259 "data_size": 65536 00:17:14.259 }, 00:17:14.259 { 00:17:14.259 "name": "BaseBdev2", 00:17:14.259 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:14.259 "is_configured": true, 00:17:14.259 "data_offset": 0, 00:17:14.259 "data_size": 65536 00:17:14.259 }, 00:17:14.259 { 00:17:14.259 "name": "BaseBdev3", 00:17:14.259 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:14.259 "is_configured": true, 00:17:14.259 "data_offset": 0, 00:17:14.259 "data_size": 65536 
00:17:14.259 }, 00:17:14.259 { 00:17:14.259 "name": "BaseBdev4", 00:17:14.259 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:14.259 "is_configured": true, 00:17:14.259 "data_offset": 0, 00:17:14.259 "data_size": 65536 00:17:14.259 } 00:17:14.259 ] 00:17:14.259 }' 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.259 22:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.516 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.516 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.516 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.516 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.516 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c528cb9c-0a92-4e4f-8fb9-681e6ff1c676 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.773 [2024-12-09 22:58:30.486773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:14.773 [2024-12-09 22:58:30.486919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:14.773 [2024-12-09 22:58:30.486957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:14.773 [2024-12-09 22:58:30.487274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:14.773 [2024-12-09 22:58:30.487450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:14.773 [2024-12-09 22:58:30.487488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:14.773 [2024-12-09 22:58:30.487790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.773 NewBaseBdev 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.773 [ 00:17:14.773 { 00:17:14.773 "name": "NewBaseBdev", 00:17:14.773 "aliases": [ 00:17:14.773 "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676" 00:17:14.773 ], 00:17:14.773 "product_name": "Malloc disk", 00:17:14.773 "block_size": 512, 00:17:14.773 "num_blocks": 65536, 00:17:14.773 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:14.773 "assigned_rate_limits": { 00:17:14.773 "rw_ios_per_sec": 0, 00:17:14.773 "rw_mbytes_per_sec": 0, 00:17:14.773 "r_mbytes_per_sec": 0, 00:17:14.773 "w_mbytes_per_sec": 0 00:17:14.773 }, 00:17:14.773 "claimed": true, 00:17:14.773 "claim_type": "exclusive_write", 00:17:14.773 "zoned": false, 00:17:14.773 "supported_io_types": { 00:17:14.773 "read": true, 00:17:14.773 "write": true, 00:17:14.773 "unmap": true, 00:17:14.773 "flush": true, 00:17:14.773 "reset": true, 00:17:14.773 "nvme_admin": false, 00:17:14.773 "nvme_io": false, 00:17:14.773 "nvme_io_md": false, 00:17:14.773 "write_zeroes": true, 00:17:14.773 "zcopy": true, 00:17:14.773 "get_zone_info": false, 00:17:14.773 "zone_management": false, 00:17:14.773 "zone_append": false, 00:17:14.773 "compare": false, 00:17:14.773 "compare_and_write": false, 00:17:14.773 "abort": true, 00:17:14.773 "seek_hole": false, 00:17:14.773 "seek_data": false, 00:17:14.773 "copy": true, 00:17:14.773 "nvme_iov_md": false 00:17:14.773 }, 00:17:14.773 "memory_domains": [ 00:17:14.773 { 00:17:14.773 
"dma_device_id": "system", 00:17:14.773 "dma_device_type": 1 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.773 "dma_device_type": 2 00:17:14.773 } 00:17:14.773 ], 00:17:14.773 "driver_specific": {} 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.773 "name": "Existed_Raid", 00:17:14.773 "uuid": "ed8b8210-b68b-4867-97a6-db5261230c75", 00:17:14.773 "strip_size_kb": 64, 00:17:14.773 "state": "online", 00:17:14.773 "raid_level": "raid0", 00:17:14.773 "superblock": false, 00:17:14.773 "num_base_bdevs": 4, 00:17:14.773 "num_base_bdevs_discovered": 4, 00:17:14.773 "num_base_bdevs_operational": 4, 00:17:14.773 "base_bdevs_list": [ 00:17:14.773 { 00:17:14.773 "name": "NewBaseBdev", 00:17:14.773 "uuid": "c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:14.773 "is_configured": true, 00:17:14.773 "data_offset": 0, 00:17:14.773 "data_size": 65536 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "name": "BaseBdev2", 00:17:14.773 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:14.773 "is_configured": true, 00:17:14.773 "data_offset": 0, 00:17:14.773 "data_size": 65536 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "name": "BaseBdev3", 00:17:14.773 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:14.773 "is_configured": true, 00:17:14.773 "data_offset": 0, 00:17:14.773 "data_size": 65536 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "name": "BaseBdev4", 00:17:14.773 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:14.773 "is_configured": true, 00:17:14.773 "data_offset": 0, 00:17:14.773 "data_size": 65536 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 }' 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.773 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:15.339 22:58:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.339 [2024-12-09 22:58:30.926645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.339 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.339 "name": "Existed_Raid", 00:17:15.339 "aliases": [ 00:17:15.339 "ed8b8210-b68b-4867-97a6-db5261230c75" 00:17:15.339 ], 00:17:15.339 "product_name": "Raid Volume", 00:17:15.339 "block_size": 512, 00:17:15.339 "num_blocks": 262144, 00:17:15.340 "uuid": "ed8b8210-b68b-4867-97a6-db5261230c75", 00:17:15.340 "assigned_rate_limits": { 00:17:15.340 "rw_ios_per_sec": 0, 00:17:15.340 "rw_mbytes_per_sec": 0, 00:17:15.340 "r_mbytes_per_sec": 0, 00:17:15.340 "w_mbytes_per_sec": 0 00:17:15.340 }, 00:17:15.340 "claimed": false, 00:17:15.340 "zoned": false, 00:17:15.340 "supported_io_types": { 00:17:15.340 "read": true, 00:17:15.340 "write": true, 00:17:15.340 "unmap": true, 00:17:15.340 
"flush": true, 00:17:15.340 "reset": true, 00:17:15.340 "nvme_admin": false, 00:17:15.340 "nvme_io": false, 00:17:15.340 "nvme_io_md": false, 00:17:15.340 "write_zeroes": true, 00:17:15.340 "zcopy": false, 00:17:15.340 "get_zone_info": false, 00:17:15.340 "zone_management": false, 00:17:15.340 "zone_append": false, 00:17:15.340 "compare": false, 00:17:15.340 "compare_and_write": false, 00:17:15.340 "abort": false, 00:17:15.340 "seek_hole": false, 00:17:15.340 "seek_data": false, 00:17:15.340 "copy": false, 00:17:15.340 "nvme_iov_md": false 00:17:15.340 }, 00:17:15.340 "memory_domains": [ 00:17:15.340 { 00:17:15.340 "dma_device_id": "system", 00:17:15.340 "dma_device_type": 1 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.340 "dma_device_type": 2 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "system", 00:17:15.340 "dma_device_type": 1 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.340 "dma_device_type": 2 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "system", 00:17:15.340 "dma_device_type": 1 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.340 "dma_device_type": 2 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "system", 00:17:15.340 "dma_device_type": 1 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.340 "dma_device_type": 2 00:17:15.340 } 00:17:15.340 ], 00:17:15.340 "driver_specific": { 00:17:15.340 "raid": { 00:17:15.340 "uuid": "ed8b8210-b68b-4867-97a6-db5261230c75", 00:17:15.340 "strip_size_kb": 64, 00:17:15.340 "state": "online", 00:17:15.340 "raid_level": "raid0", 00:17:15.340 "superblock": false, 00:17:15.340 "num_base_bdevs": 4, 00:17:15.340 "num_base_bdevs_discovered": 4, 00:17:15.340 "num_base_bdevs_operational": 4, 00:17:15.340 "base_bdevs_list": [ 00:17:15.340 { 00:17:15.340 "name": "NewBaseBdev", 00:17:15.340 "uuid": 
"c528cb9c-0a92-4e4f-8fb9-681e6ff1c676", 00:17:15.340 "is_configured": true, 00:17:15.340 "data_offset": 0, 00:17:15.340 "data_size": 65536 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "name": "BaseBdev2", 00:17:15.340 "uuid": "89267232-bc55-480f-b5cf-01bc41384268", 00:17:15.340 "is_configured": true, 00:17:15.340 "data_offset": 0, 00:17:15.340 "data_size": 65536 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "name": "BaseBdev3", 00:17:15.340 "uuid": "63129368-8e64-43e2-a189-885307f98755", 00:17:15.340 "is_configured": true, 00:17:15.340 "data_offset": 0, 00:17:15.340 "data_size": 65536 00:17:15.340 }, 00:17:15.340 { 00:17:15.340 "name": "BaseBdev4", 00:17:15.340 "uuid": "8d4a4d55-f987-4a37-b913-a373b13fe704", 00:17:15.340 "is_configured": true, 00:17:15.340 "data_offset": 0, 00:17:15.340 "data_size": 65536 00:17:15.340 } 00:17:15.340 ] 00:17:15.340 } 00:17:15.340 } 00:17:15.340 }' 00:17:15.340 22:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:15.340 BaseBdev2 00:17:15.340 BaseBdev3 00:17:15.340 BaseBdev4' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.340 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.598 [2024-12-09 22:58:31.245660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.598 [2024-12-09 22:58:31.245698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.598 [2024-12-09 22:58:31.245791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:15.598 [2024-12-09 22:58:31.245870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.598 [2024-12-09 22:58:31.245882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:15.598 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69936 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69936 ']' 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69936 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69936 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.599 killing process with pid 69936 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69936' 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69936 00:17:15.599 22:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69936 00:17:15.599 [2024-12-09 22:58:31.274972] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.164 [2024-12-09 22:58:31.764559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.608 ************************************ 
00:17:17.608 END TEST raid_state_function_test 00:17:17.608 ************************************ 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.608 00:17:17.608 real 0m12.080s 00:17:17.608 user 0m18.931s 00:17:17.608 sys 0m1.939s 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.608 22:58:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:17.608 22:58:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:17.608 22:58:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.608 22:58:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.608 ************************************ 00:17:17.608 START TEST raid_state_function_test_sb 00:17:17.608 ************************************ 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:17.608 22:58:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:17.608 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 
00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:17.609 Process raid pid: 70617 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70617 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70617' 00:17:17.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70617 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70617 ']' 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.609 22:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.609 [2024-12-09 22:58:33.268634] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:17:17.609 [2024-12-09 22:58:33.268924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.867 [2024-12-09 22:58:33.464165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.867 [2024-12-09 22:58:33.604704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.126 [2024-12-09 22:58:33.852822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.126 [2024-12-09 22:58:33.852930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.385 [2024-12-09 22:58:34.192208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.385 [2024-12-09 22:58:34.192274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.385 [2024-12-09 22:58:34.192287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.385 [2024-12-09 22:58:34.192299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.385 [2024-12-09 22:58:34.192306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:17:18.385 [2024-12-09 22:58:34.192317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.385 [2024-12-09 22:58:34.192325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.385 [2024-12-09 22:58:34.192335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.385 22:58:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.385 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.644 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.644 "name": "Existed_Raid", 00:17:18.644 "uuid": "d2bcb349-8507-4df1-a5cc-cb54e0f3f3ea", 00:17:18.644 "strip_size_kb": 64, 00:17:18.644 "state": "configuring", 00:17:18.644 "raid_level": "raid0", 00:17:18.644 "superblock": true, 00:17:18.644 "num_base_bdevs": 4, 00:17:18.644 "num_base_bdevs_discovered": 0, 00:17:18.644 "num_base_bdevs_operational": 4, 00:17:18.644 "base_bdevs_list": [ 00:17:18.644 { 00:17:18.644 "name": "BaseBdev1", 00:17:18.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.644 "is_configured": false, 00:17:18.644 "data_offset": 0, 00:17:18.644 "data_size": 0 00:17:18.644 }, 00:17:18.644 { 00:17:18.644 "name": "BaseBdev2", 00:17:18.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.644 "is_configured": false, 00:17:18.644 "data_offset": 0, 00:17:18.644 "data_size": 0 00:17:18.644 }, 00:17:18.644 { 00:17:18.644 "name": "BaseBdev3", 00:17:18.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.644 "is_configured": false, 00:17:18.644 "data_offset": 0, 00:17:18.644 "data_size": 0 00:17:18.644 }, 00:17:18.644 { 00:17:18.644 "name": "BaseBdev4", 00:17:18.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.644 "is_configured": false, 00:17:18.644 "data_offset": 0, 00:17:18.644 "data_size": 0 00:17:18.644 } 00:17:18.644 ] 00:17:18.644 }' 00:17:18.644 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.644 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 22:58:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 [2024-12-09 22:58:34.623441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.903 [2024-12-09 22:58:34.623503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 [2024-12-09 22:58:34.635480] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.903 [2024-12-09 22:58:34.635532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.903 [2024-12-09 22:58:34.635543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.903 [2024-12-09 22:58:34.635555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.903 [2024-12-09 22:58:34.635562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.903 [2024-12-09 22:58:34.635573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.903 [2024-12-09 22:58:34.635580] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:18.903 [2024-12-09 22:58:34.635591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 [2024-12-09 22:58:34.686889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.903 BaseBdev1 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.903 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 [ 00:17:18.903 { 00:17:18.903 "name": "BaseBdev1", 00:17:18.903 "aliases": [ 00:17:18.903 "472e3ee0-693b-4dda-9985-dce891411d2c" 00:17:18.903 ], 00:17:18.903 "product_name": "Malloc disk", 00:17:18.903 "block_size": 512, 00:17:18.903 "num_blocks": 65536, 00:17:18.903 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:18.903 "assigned_rate_limits": { 00:17:18.903 "rw_ios_per_sec": 0, 00:17:18.903 "rw_mbytes_per_sec": 0, 00:17:18.903 "r_mbytes_per_sec": 0, 00:17:18.903 "w_mbytes_per_sec": 0 00:17:18.903 }, 00:17:18.903 "claimed": true, 00:17:18.903 "claim_type": "exclusive_write", 00:17:18.903 "zoned": false, 00:17:18.903 "supported_io_types": { 00:17:18.903 "read": true, 00:17:18.903 "write": true, 00:17:18.903 "unmap": true, 00:17:18.903 "flush": true, 00:17:18.903 "reset": true, 00:17:18.903 "nvme_admin": false, 00:17:18.903 "nvme_io": false, 00:17:18.903 "nvme_io_md": false, 00:17:18.904 "write_zeroes": true, 00:17:18.904 "zcopy": true, 00:17:18.904 "get_zone_info": false, 00:17:18.904 "zone_management": false, 00:17:18.904 "zone_append": false, 00:17:18.904 "compare": false, 00:17:18.904 "compare_and_write": false, 00:17:18.904 "abort": true, 00:17:18.904 "seek_hole": false, 00:17:18.904 "seek_data": false, 00:17:18.904 "copy": true, 00:17:18.904 "nvme_iov_md": false 00:17:18.904 }, 00:17:18.904 "memory_domains": [ 00:17:18.904 { 00:17:18.904 "dma_device_id": "system", 00:17:18.904 "dma_device_type": 1 00:17:18.904 }, 00:17:18.904 { 00:17:18.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.904 "dma_device_type": 2 00:17:18.904 } 
00:17:18.904 ], 00:17:18.904 "driver_specific": {} 00:17:18.904 } 00:17:18.904 ] 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.904 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.904 22:58:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.163 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.163 "name": "Existed_Raid", 00:17:19.163 "uuid": "04dd8e2a-6cb1-4036-a404-cee1ae15b63d", 00:17:19.163 "strip_size_kb": 64, 00:17:19.163 "state": "configuring", 00:17:19.163 "raid_level": "raid0", 00:17:19.163 "superblock": true, 00:17:19.163 "num_base_bdevs": 4, 00:17:19.163 "num_base_bdevs_discovered": 1, 00:17:19.163 "num_base_bdevs_operational": 4, 00:17:19.163 "base_bdevs_list": [ 00:17:19.163 { 00:17:19.163 "name": "BaseBdev1", 00:17:19.163 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:19.163 "is_configured": true, 00:17:19.163 "data_offset": 2048, 00:17:19.163 "data_size": 63488 00:17:19.163 }, 00:17:19.163 { 00:17:19.163 "name": "BaseBdev2", 00:17:19.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.163 "is_configured": false, 00:17:19.163 "data_offset": 0, 00:17:19.163 "data_size": 0 00:17:19.163 }, 00:17:19.163 { 00:17:19.163 "name": "BaseBdev3", 00:17:19.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.163 "is_configured": false, 00:17:19.163 "data_offset": 0, 00:17:19.163 "data_size": 0 00:17:19.163 }, 00:17:19.163 { 00:17:19.163 "name": "BaseBdev4", 00:17:19.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.163 "is_configured": false, 00:17:19.163 "data_offset": 0, 00:17:19.163 "data_size": 0 00:17:19.163 } 00:17:19.163 ] 00:17:19.163 }' 00:17:19.163 22:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.163 22:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.426 22:58:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.426 [2024-12-09 22:58:35.158248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.426 [2024-12-09 22:58:35.158377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.426 [2024-12-09 22:58:35.170316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.426 [2024-12-09 22:58:35.172496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.426 [2024-12-09 22:58:35.172594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.426 [2024-12-09 22:58:35.172647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.426 [2024-12-09 22:58:35.172701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.426 [2024-12-09 22:58:35.172749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.426 [2024-12-09 22:58:35.172798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:19.426 "name": "Existed_Raid", 00:17:19.426 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:19.426 "strip_size_kb": 64, 00:17:19.426 "state": "configuring", 00:17:19.426 "raid_level": "raid0", 00:17:19.426 "superblock": true, 00:17:19.426 "num_base_bdevs": 4, 00:17:19.426 "num_base_bdevs_discovered": 1, 00:17:19.426 "num_base_bdevs_operational": 4, 00:17:19.426 "base_bdevs_list": [ 00:17:19.426 { 00:17:19.426 "name": "BaseBdev1", 00:17:19.426 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:19.426 "is_configured": true, 00:17:19.426 "data_offset": 2048, 00:17:19.426 "data_size": 63488 00:17:19.426 }, 00:17:19.426 { 00:17:19.426 "name": "BaseBdev2", 00:17:19.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.426 "is_configured": false, 00:17:19.426 "data_offset": 0, 00:17:19.426 "data_size": 0 00:17:19.426 }, 00:17:19.426 { 00:17:19.426 "name": "BaseBdev3", 00:17:19.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.426 "is_configured": false, 00:17:19.426 "data_offset": 0, 00:17:19.426 "data_size": 0 00:17:19.426 }, 00:17:19.426 { 00:17:19.426 "name": "BaseBdev4", 00:17:19.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.426 "is_configured": false, 00:17:19.426 "data_offset": 0, 00:17:19.426 "data_size": 0 00:17:19.426 } 00:17:19.426 ] 00:17:19.426 }' 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.426 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.995 [2024-12-09 22:58:35.654900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:17:19.995 BaseBdev2 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.995 [ 00:17:19.995 { 00:17:19.995 "name": "BaseBdev2", 00:17:19.995 "aliases": [ 00:17:19.995 "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91" 00:17:19.995 ], 00:17:19.995 "product_name": "Malloc disk", 00:17:19.995 "block_size": 512, 00:17:19.995 "num_blocks": 65536, 00:17:19.995 "uuid": "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91", 
00:17:19.995 "assigned_rate_limits": { 00:17:19.995 "rw_ios_per_sec": 0, 00:17:19.995 "rw_mbytes_per_sec": 0, 00:17:19.995 "r_mbytes_per_sec": 0, 00:17:19.995 "w_mbytes_per_sec": 0 00:17:19.995 }, 00:17:19.995 "claimed": true, 00:17:19.995 "claim_type": "exclusive_write", 00:17:19.995 "zoned": false, 00:17:19.995 "supported_io_types": { 00:17:19.995 "read": true, 00:17:19.995 "write": true, 00:17:19.995 "unmap": true, 00:17:19.995 "flush": true, 00:17:19.995 "reset": true, 00:17:19.995 "nvme_admin": false, 00:17:19.995 "nvme_io": false, 00:17:19.995 "nvme_io_md": false, 00:17:19.995 "write_zeroes": true, 00:17:19.995 "zcopy": true, 00:17:19.995 "get_zone_info": false, 00:17:19.995 "zone_management": false, 00:17:19.995 "zone_append": false, 00:17:19.995 "compare": false, 00:17:19.995 "compare_and_write": false, 00:17:19.995 "abort": true, 00:17:19.995 "seek_hole": false, 00:17:19.995 "seek_data": false, 00:17:19.995 "copy": true, 00:17:19.995 "nvme_iov_md": false 00:17:19.995 }, 00:17:19.995 "memory_domains": [ 00:17:19.995 { 00:17:19.995 "dma_device_id": "system", 00:17:19.995 "dma_device_type": 1 00:17:19.995 }, 00:17:19.995 { 00:17:19.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.995 "dma_device_type": 2 00:17:19.995 } 00:17:19.995 ], 00:17:19.995 "driver_specific": {} 00:17:19.995 } 00:17:19.995 ] 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.995 "name": "Existed_Raid", 00:17:19.995 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:19.995 "strip_size_kb": 64, 00:17:19.995 "state": "configuring", 00:17:19.995 "raid_level": "raid0", 00:17:19.995 "superblock": true, 00:17:19.995 "num_base_bdevs": 4, 00:17:19.995 "num_base_bdevs_discovered": 2, 00:17:19.995 
"num_base_bdevs_operational": 4, 00:17:19.995 "base_bdevs_list": [ 00:17:19.995 { 00:17:19.995 "name": "BaseBdev1", 00:17:19.995 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:19.995 "is_configured": true, 00:17:19.995 "data_offset": 2048, 00:17:19.995 "data_size": 63488 00:17:19.995 }, 00:17:19.995 { 00:17:19.995 "name": "BaseBdev2", 00:17:19.995 "uuid": "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91", 00:17:19.995 "is_configured": true, 00:17:19.995 "data_offset": 2048, 00:17:19.995 "data_size": 63488 00:17:19.995 }, 00:17:19.995 { 00:17:19.995 "name": "BaseBdev3", 00:17:19.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.995 "is_configured": false, 00:17:19.995 "data_offset": 0, 00:17:19.995 "data_size": 0 00:17:19.995 }, 00:17:19.995 { 00:17:19.995 "name": "BaseBdev4", 00:17:19.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.995 "is_configured": false, 00:17:19.995 "data_offset": 0, 00:17:19.995 "data_size": 0 00:17:19.995 } 00:17:19.995 ] 00:17:19.995 }' 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.995 22:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 [2024-12-09 22:58:36.190880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.565 BaseBdev3 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 [ 00:17:20.565 { 00:17:20.565 "name": "BaseBdev3", 00:17:20.565 "aliases": [ 00:17:20.565 "32505613-64f6-433e-9b1c-a014cdab7bab" 00:17:20.565 ], 00:17:20.565 "product_name": "Malloc disk", 00:17:20.565 "block_size": 512, 00:17:20.565 "num_blocks": 65536, 00:17:20.565 "uuid": "32505613-64f6-433e-9b1c-a014cdab7bab", 00:17:20.565 "assigned_rate_limits": { 00:17:20.565 "rw_ios_per_sec": 0, 00:17:20.565 "rw_mbytes_per_sec": 0, 00:17:20.565 "r_mbytes_per_sec": 0, 00:17:20.565 "w_mbytes_per_sec": 0 00:17:20.565 }, 00:17:20.565 "claimed": true, 00:17:20.565 "claim_type": "exclusive_write", 00:17:20.565 "zoned": false, 00:17:20.565 "supported_io_types": { 
00:17:20.565 "read": true, 00:17:20.565 "write": true, 00:17:20.565 "unmap": true, 00:17:20.565 "flush": true, 00:17:20.565 "reset": true, 00:17:20.565 "nvme_admin": false, 00:17:20.565 "nvme_io": false, 00:17:20.565 "nvme_io_md": false, 00:17:20.565 "write_zeroes": true, 00:17:20.565 "zcopy": true, 00:17:20.565 "get_zone_info": false, 00:17:20.565 "zone_management": false, 00:17:20.565 "zone_append": false, 00:17:20.565 "compare": false, 00:17:20.565 "compare_and_write": false, 00:17:20.565 "abort": true, 00:17:20.565 "seek_hole": false, 00:17:20.565 "seek_data": false, 00:17:20.565 "copy": true, 00:17:20.565 "nvme_iov_md": false 00:17:20.565 }, 00:17:20.565 "memory_domains": [ 00:17:20.565 { 00:17:20.565 "dma_device_id": "system", 00:17:20.565 "dma_device_type": 1 00:17:20.565 }, 00:17:20.565 { 00:17:20.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.565 "dma_device_type": 2 00:17:20.565 } 00:17:20.565 ], 00:17:20.565 "driver_specific": {} 00:17:20.565 } 00:17:20.565 ] 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.565 "name": "Existed_Raid", 00:17:20.565 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:20.565 "strip_size_kb": 64, 00:17:20.565 "state": "configuring", 00:17:20.565 "raid_level": "raid0", 00:17:20.565 "superblock": true, 00:17:20.565 "num_base_bdevs": 4, 00:17:20.565 "num_base_bdevs_discovered": 3, 00:17:20.565 "num_base_bdevs_operational": 4, 00:17:20.565 "base_bdevs_list": [ 00:17:20.565 { 00:17:20.565 "name": "BaseBdev1", 00:17:20.565 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:20.565 "is_configured": true, 00:17:20.565 "data_offset": 2048, 00:17:20.565 "data_size": 63488 00:17:20.565 }, 00:17:20.565 { 00:17:20.565 "name": "BaseBdev2", 00:17:20.565 
"uuid": "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91", 00:17:20.565 "is_configured": true, 00:17:20.565 "data_offset": 2048, 00:17:20.565 "data_size": 63488 00:17:20.565 }, 00:17:20.565 { 00:17:20.565 "name": "BaseBdev3", 00:17:20.565 "uuid": "32505613-64f6-433e-9b1c-a014cdab7bab", 00:17:20.565 "is_configured": true, 00:17:20.565 "data_offset": 2048, 00:17:20.565 "data_size": 63488 00:17:20.565 }, 00:17:20.565 { 00:17:20.565 "name": "BaseBdev4", 00:17:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.565 "is_configured": false, 00:17:20.565 "data_offset": 0, 00:17:20.565 "data_size": 0 00:17:20.565 } 00:17:20.565 ] 00:17:20.565 }' 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.565 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.825 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:20.825 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.825 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.083 [2024-12-09 22:58:36.700651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:21.083 [2024-12-09 22:58:36.701067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:21.083 [2024-12-09 22:58:36.701129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:21.083 BaseBdev4 00:17:21.083 [2024-12-09 22:58:36.701485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.083 [2024-12-09 22:58:36.701699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.083 [2024-12-09 
22:58:36.701749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:21.083 [2024-12-09 22:58:36.701932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.083 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.083 [ 00:17:21.083 { 00:17:21.083 "name": "BaseBdev4", 00:17:21.084 "aliases": [ 00:17:21.084 "99020f62-5a20-4665-b934-03b88dc7e246" 00:17:21.084 ], 00:17:21.084 "product_name": "Malloc disk", 00:17:21.084 "block_size": 512, 00:17:21.084 
"num_blocks": 65536, 00:17:21.084 "uuid": "99020f62-5a20-4665-b934-03b88dc7e246", 00:17:21.084 "assigned_rate_limits": { 00:17:21.084 "rw_ios_per_sec": 0, 00:17:21.084 "rw_mbytes_per_sec": 0, 00:17:21.084 "r_mbytes_per_sec": 0, 00:17:21.084 "w_mbytes_per_sec": 0 00:17:21.084 }, 00:17:21.084 "claimed": true, 00:17:21.084 "claim_type": "exclusive_write", 00:17:21.084 "zoned": false, 00:17:21.084 "supported_io_types": { 00:17:21.084 "read": true, 00:17:21.084 "write": true, 00:17:21.084 "unmap": true, 00:17:21.084 "flush": true, 00:17:21.084 "reset": true, 00:17:21.084 "nvme_admin": false, 00:17:21.084 "nvme_io": false, 00:17:21.084 "nvme_io_md": false, 00:17:21.084 "write_zeroes": true, 00:17:21.084 "zcopy": true, 00:17:21.084 "get_zone_info": false, 00:17:21.084 "zone_management": false, 00:17:21.084 "zone_append": false, 00:17:21.084 "compare": false, 00:17:21.084 "compare_and_write": false, 00:17:21.084 "abort": true, 00:17:21.084 "seek_hole": false, 00:17:21.084 "seek_data": false, 00:17:21.084 "copy": true, 00:17:21.084 "nvme_iov_md": false 00:17:21.084 }, 00:17:21.084 "memory_domains": [ 00:17:21.084 { 00:17:21.084 "dma_device_id": "system", 00:17:21.084 "dma_device_type": 1 00:17:21.084 }, 00:17:21.084 { 00:17:21.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.084 "dma_device_type": 2 00:17:21.084 } 00:17:21.084 ], 00:17:21.084 "driver_specific": {} 00:17:21.084 } 00:17:21.084 ] 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.084 "name": "Existed_Raid", 00:17:21.084 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:21.084 "strip_size_kb": 64, 00:17:21.084 "state": "online", 00:17:21.084 "raid_level": "raid0", 00:17:21.084 "superblock": true, 00:17:21.084 "num_base_bdevs": 4, 
00:17:21.084 "num_base_bdevs_discovered": 4, 00:17:21.084 "num_base_bdevs_operational": 4, 00:17:21.084 "base_bdevs_list": [ 00:17:21.084 { 00:17:21.084 "name": "BaseBdev1", 00:17:21.084 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:21.084 "is_configured": true, 00:17:21.084 "data_offset": 2048, 00:17:21.084 "data_size": 63488 00:17:21.084 }, 00:17:21.084 { 00:17:21.084 "name": "BaseBdev2", 00:17:21.084 "uuid": "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91", 00:17:21.084 "is_configured": true, 00:17:21.084 "data_offset": 2048, 00:17:21.084 "data_size": 63488 00:17:21.084 }, 00:17:21.084 { 00:17:21.084 "name": "BaseBdev3", 00:17:21.084 "uuid": "32505613-64f6-433e-9b1c-a014cdab7bab", 00:17:21.084 "is_configured": true, 00:17:21.084 "data_offset": 2048, 00:17:21.084 "data_size": 63488 00:17:21.084 }, 00:17:21.084 { 00:17:21.084 "name": "BaseBdev4", 00:17:21.084 "uuid": "99020f62-5a20-4665-b934-03b88dc7e246", 00:17:21.084 "is_configured": true, 00:17:21.084 "data_offset": 2048, 00:17:21.084 "data_size": 63488 00:17:21.084 } 00:17:21.084 ] 00:17:21.084 }' 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.084 22:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.343 
22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.343 [2024-12-09 22:58:37.164984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.343 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.602 "name": "Existed_Raid", 00:17:21.602 "aliases": [ 00:17:21.602 "681dd69c-e620-42c0-8b66-0398d2646bbb" 00:17:21.602 ], 00:17:21.602 "product_name": "Raid Volume", 00:17:21.602 "block_size": 512, 00:17:21.602 "num_blocks": 253952, 00:17:21.602 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:21.602 "assigned_rate_limits": { 00:17:21.602 "rw_ios_per_sec": 0, 00:17:21.602 "rw_mbytes_per_sec": 0, 00:17:21.602 "r_mbytes_per_sec": 0, 00:17:21.602 "w_mbytes_per_sec": 0 00:17:21.602 }, 00:17:21.602 "claimed": false, 00:17:21.602 "zoned": false, 00:17:21.602 "supported_io_types": { 00:17:21.602 "read": true, 00:17:21.602 "write": true, 00:17:21.602 "unmap": true, 00:17:21.602 "flush": true, 00:17:21.602 "reset": true, 00:17:21.602 "nvme_admin": false, 00:17:21.602 "nvme_io": false, 00:17:21.602 "nvme_io_md": false, 00:17:21.602 "write_zeroes": true, 00:17:21.602 "zcopy": false, 00:17:21.602 "get_zone_info": false, 00:17:21.602 "zone_management": false, 00:17:21.602 "zone_append": false, 00:17:21.602 "compare": false, 00:17:21.602 "compare_and_write": false, 00:17:21.602 "abort": false, 00:17:21.602 "seek_hole": false, 00:17:21.602 "seek_data": false, 00:17:21.602 "copy": false, 00:17:21.602 
"nvme_iov_md": false 00:17:21.602 }, 00:17:21.602 "memory_domains": [ 00:17:21.602 { 00:17:21.602 "dma_device_id": "system", 00:17:21.602 "dma_device_type": 1 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.602 "dma_device_type": 2 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "system", 00:17:21.602 "dma_device_type": 1 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.602 "dma_device_type": 2 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "system", 00:17:21.602 "dma_device_type": 1 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.602 "dma_device_type": 2 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "system", 00:17:21.602 "dma_device_type": 1 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.602 "dma_device_type": 2 00:17:21.602 } 00:17:21.602 ], 00:17:21.602 "driver_specific": { 00:17:21.602 "raid": { 00:17:21.602 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:21.602 "strip_size_kb": 64, 00:17:21.602 "state": "online", 00:17:21.602 "raid_level": "raid0", 00:17:21.602 "superblock": true, 00:17:21.602 "num_base_bdevs": 4, 00:17:21.602 "num_base_bdevs_discovered": 4, 00:17:21.602 "num_base_bdevs_operational": 4, 00:17:21.602 "base_bdevs_list": [ 00:17:21.602 { 00:17:21.602 "name": "BaseBdev1", 00:17:21.602 "uuid": "472e3ee0-693b-4dda-9985-dce891411d2c", 00:17:21.602 "is_configured": true, 00:17:21.602 "data_offset": 2048, 00:17:21.602 "data_size": 63488 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "name": "BaseBdev2", 00:17:21.602 "uuid": "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91", 00:17:21.602 "is_configured": true, 00:17:21.602 "data_offset": 2048, 00:17:21.602 "data_size": 63488 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "name": "BaseBdev3", 00:17:21.602 "uuid": "32505613-64f6-433e-9b1c-a014cdab7bab", 00:17:21.602 "is_configured": true, 
00:17:21.602 "data_offset": 2048, 00:17:21.602 "data_size": 63488 00:17:21.602 }, 00:17:21.602 { 00:17:21.602 "name": "BaseBdev4", 00:17:21.602 "uuid": "99020f62-5a20-4665-b934-03b88dc7e246", 00:17:21.602 "is_configured": true, 00:17:21.602 "data_offset": 2048, 00:17:21.602 "data_size": 63488 00:17:21.602 } 00:17:21.602 ] 00:17:21.602 } 00:17:21.602 } 00:17:21.602 }' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:21.602 BaseBdev2 00:17:21.602 BaseBdev3 00:17:21.602 BaseBdev4' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.602 22:58:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:21.602 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.603 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.603 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.862 [2024-12-09 22:58:37.488359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.862 [2024-12-09 22:58:37.488473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.862 [2024-12-09 22:58:37.488545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.862 "name": "Existed_Raid", 00:17:21.862 "uuid": "681dd69c-e620-42c0-8b66-0398d2646bbb", 00:17:21.862 "strip_size_kb": 64, 00:17:21.862 "state": "offline", 00:17:21.862 "raid_level": "raid0", 00:17:21.862 "superblock": true, 00:17:21.862 "num_base_bdevs": 4, 00:17:21.862 "num_base_bdevs_discovered": 3, 00:17:21.862 "num_base_bdevs_operational": 3, 00:17:21.862 "base_bdevs_list": [ 00:17:21.862 { 00:17:21.862 "name": null, 00:17:21.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.862 "is_configured": false, 00:17:21.862 "data_offset": 0, 00:17:21.862 "data_size": 63488 00:17:21.862 }, 00:17:21.862 { 00:17:21.862 "name": "BaseBdev2", 00:17:21.862 "uuid": "6d199c5f-7e87-45e3-8a7a-1de05ae7fa91", 00:17:21.862 "is_configured": true, 00:17:21.862 "data_offset": 2048, 00:17:21.862 "data_size": 63488 00:17:21.862 }, 00:17:21.862 { 00:17:21.862 "name": "BaseBdev3", 00:17:21.862 "uuid": "32505613-64f6-433e-9b1c-a014cdab7bab", 00:17:21.862 "is_configured": true, 00:17:21.862 "data_offset": 2048, 00:17:21.862 "data_size": 63488 00:17:21.862 }, 00:17:21.862 { 00:17:21.862 "name": "BaseBdev4", 00:17:21.862 "uuid": "99020f62-5a20-4665-b934-03b88dc7e246", 00:17:21.862 "is_configured": true, 00:17:21.862 "data_offset": 2048, 00:17:21.862 "data_size": 63488 00:17:21.862 } 00:17:21.862 ] 00:17:21.862 }' 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.862 22:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.430 
22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.430 [2024-12-09 22:58:38.110911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:22.430 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.431 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.431 [2024-12-09 22:58:38.278875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:22.689 22:58:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.689 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.689 [2024-12-09 22:58:38.442526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:22.689 [2024-12-09 22:58:38.442584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 BaseBdev2 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 [ 00:17:22.948 { 00:17:22.948 "name": "BaseBdev2", 00:17:22.948 "aliases": [ 00:17:22.948 
"def589b2-a638-48f9-b156-d6038bb2f929" 00:17:22.948 ], 00:17:22.948 "product_name": "Malloc disk", 00:17:22.948 "block_size": 512, 00:17:22.948 "num_blocks": 65536, 00:17:22.948 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:22.948 "assigned_rate_limits": { 00:17:22.948 "rw_ios_per_sec": 0, 00:17:22.948 "rw_mbytes_per_sec": 0, 00:17:22.948 "r_mbytes_per_sec": 0, 00:17:22.948 "w_mbytes_per_sec": 0 00:17:22.948 }, 00:17:22.948 "claimed": false, 00:17:22.948 "zoned": false, 00:17:22.948 "supported_io_types": { 00:17:22.948 "read": true, 00:17:22.948 "write": true, 00:17:22.948 "unmap": true, 00:17:22.948 "flush": true, 00:17:22.948 "reset": true, 00:17:22.948 "nvme_admin": false, 00:17:22.948 "nvme_io": false, 00:17:22.948 "nvme_io_md": false, 00:17:22.948 "write_zeroes": true, 00:17:22.948 "zcopy": true, 00:17:22.948 "get_zone_info": false, 00:17:22.948 "zone_management": false, 00:17:22.948 "zone_append": false, 00:17:22.948 "compare": false, 00:17:22.948 "compare_and_write": false, 00:17:22.948 "abort": true, 00:17:22.948 "seek_hole": false, 00:17:22.948 "seek_data": false, 00:17:22.948 "copy": true, 00:17:22.948 "nvme_iov_md": false 00:17:22.948 }, 00:17:22.948 "memory_domains": [ 00:17:22.948 { 00:17:22.948 "dma_device_id": "system", 00:17:22.948 "dma_device_type": 1 00:17:22.948 }, 00:17:22.948 { 00:17:22.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.948 "dma_device_type": 2 00:17:22.948 } 00:17:22.948 ], 00:17:22.948 "driver_specific": {} 00:17:22.948 } 00:17:22.948 ] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:22.948 22:58:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 BaseBdev3 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.948 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.948 [ 00:17:22.948 { 
00:17:22.948 "name": "BaseBdev3", 00:17:22.948 "aliases": [ 00:17:22.948 "0e0f30f5-12b1-474b-b323-a615b69e754e" 00:17:22.948 ], 00:17:22.948 "product_name": "Malloc disk", 00:17:22.948 "block_size": 512, 00:17:22.948 "num_blocks": 65536, 00:17:22.948 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:22.948 "assigned_rate_limits": { 00:17:22.948 "rw_ios_per_sec": 0, 00:17:22.948 "rw_mbytes_per_sec": 0, 00:17:22.948 "r_mbytes_per_sec": 0, 00:17:22.948 "w_mbytes_per_sec": 0 00:17:22.948 }, 00:17:22.948 "claimed": false, 00:17:22.948 "zoned": false, 00:17:22.948 "supported_io_types": { 00:17:22.948 "read": true, 00:17:22.948 "write": true, 00:17:22.948 "unmap": true, 00:17:22.948 "flush": true, 00:17:22.948 "reset": true, 00:17:22.948 "nvme_admin": false, 00:17:22.948 "nvme_io": false, 00:17:22.948 "nvme_io_md": false, 00:17:22.948 "write_zeroes": true, 00:17:22.948 "zcopy": true, 00:17:22.948 "get_zone_info": false, 00:17:22.948 "zone_management": false, 00:17:22.948 "zone_append": false, 00:17:22.948 "compare": false, 00:17:22.948 "compare_and_write": false, 00:17:22.948 "abort": true, 00:17:22.948 "seek_hole": false, 00:17:22.948 "seek_data": false, 00:17:22.948 "copy": true, 00:17:22.948 "nvme_iov_md": false 00:17:22.948 }, 00:17:22.948 "memory_domains": [ 00:17:22.948 { 00:17:22.948 "dma_device_id": "system", 00:17:22.948 "dma_device_type": 1 00:17:22.948 }, 00:17:22.948 { 00:17:22.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.949 "dma_device_type": 2 00:17:22.949 } 00:17:22.949 ], 00:17:22.949 "driver_specific": {} 00:17:22.949 } 00:17:22.949 ] 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.949 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.207 BaseBdev4 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:23.207 [ 00:17:23.207 { 00:17:23.207 "name": "BaseBdev4", 00:17:23.207 "aliases": [ 00:17:23.207 "532ca28c-7822-4084-a3ca-877b4bb2c45a" 00:17:23.207 ], 00:17:23.207 "product_name": "Malloc disk", 00:17:23.207 "block_size": 512, 00:17:23.207 "num_blocks": 65536, 00:17:23.207 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:23.207 "assigned_rate_limits": { 00:17:23.207 "rw_ios_per_sec": 0, 00:17:23.207 "rw_mbytes_per_sec": 0, 00:17:23.207 "r_mbytes_per_sec": 0, 00:17:23.207 "w_mbytes_per_sec": 0 00:17:23.207 }, 00:17:23.207 "claimed": false, 00:17:23.207 "zoned": false, 00:17:23.207 "supported_io_types": { 00:17:23.207 "read": true, 00:17:23.207 "write": true, 00:17:23.207 "unmap": true, 00:17:23.207 "flush": true, 00:17:23.207 "reset": true, 00:17:23.207 "nvme_admin": false, 00:17:23.207 "nvme_io": false, 00:17:23.207 "nvme_io_md": false, 00:17:23.207 "write_zeroes": true, 00:17:23.207 "zcopy": true, 00:17:23.207 "get_zone_info": false, 00:17:23.207 "zone_management": false, 00:17:23.207 "zone_append": false, 00:17:23.207 "compare": false, 00:17:23.207 "compare_and_write": false, 00:17:23.207 "abort": true, 00:17:23.207 "seek_hole": false, 00:17:23.207 "seek_data": false, 00:17:23.207 "copy": true, 00:17:23.207 "nvme_iov_md": false 00:17:23.207 }, 00:17:23.207 "memory_domains": [ 00:17:23.207 { 00:17:23.207 "dma_device_id": "system", 00:17:23.207 "dma_device_type": 1 00:17:23.207 }, 00:17:23.207 { 00:17:23.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.207 "dma_device_type": 2 00:17:23.207 } 00:17:23.207 ], 00:17:23.207 "driver_specific": {} 00:17:23.207 } 00:17:23.207 ] 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.207 22:58:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.207 [2024-12-09 22:58:38.835131] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.207 [2024-12-09 22:58:38.835185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.207 [2024-12-09 22:58:38.835214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.207 [2024-12-09 22:58:38.837428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.207 [2024-12-09 22:58:38.837568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.207 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.208 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.208 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.208 "name": "Existed_Raid", 00:17:23.208 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:23.208 "strip_size_kb": 64, 00:17:23.208 "state": "configuring", 00:17:23.208 "raid_level": "raid0", 00:17:23.208 "superblock": true, 00:17:23.208 "num_base_bdevs": 4, 00:17:23.208 "num_base_bdevs_discovered": 3, 00:17:23.208 "num_base_bdevs_operational": 4, 00:17:23.208 "base_bdevs_list": [ 00:17:23.208 { 00:17:23.208 "name": "BaseBdev1", 00:17:23.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.208 "is_configured": false, 00:17:23.208 "data_offset": 0, 00:17:23.208 "data_size": 0 00:17:23.208 }, 00:17:23.208 { 00:17:23.208 "name": "BaseBdev2", 00:17:23.208 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:23.208 "is_configured": true, 00:17:23.208 "data_offset": 2048, 00:17:23.208 "data_size": 63488 
00:17:23.208 }, 00:17:23.208 { 00:17:23.208 "name": "BaseBdev3", 00:17:23.208 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:23.208 "is_configured": true, 00:17:23.208 "data_offset": 2048, 00:17:23.208 "data_size": 63488 00:17:23.208 }, 00:17:23.208 { 00:17:23.208 "name": "BaseBdev4", 00:17:23.208 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:23.208 "is_configured": true, 00:17:23.208 "data_offset": 2048, 00:17:23.208 "data_size": 63488 00:17:23.208 } 00:17:23.208 ] 00:17:23.208 }' 00:17:23.208 22:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.208 22:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.467 [2024-12-09 22:58:39.258481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.467 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.467 "name": "Existed_Raid", 00:17:23.467 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:23.467 "strip_size_kb": 64, 00:17:23.467 "state": "configuring", 00:17:23.467 "raid_level": "raid0", 00:17:23.467 "superblock": true, 00:17:23.467 "num_base_bdevs": 4, 00:17:23.467 "num_base_bdevs_discovered": 2, 00:17:23.467 "num_base_bdevs_operational": 4, 00:17:23.467 "base_bdevs_list": [ 00:17:23.467 { 00:17:23.467 "name": "BaseBdev1", 00:17:23.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.467 "is_configured": false, 00:17:23.467 "data_offset": 0, 00:17:23.467 "data_size": 0 00:17:23.467 }, 00:17:23.467 { 00:17:23.467 "name": null, 00:17:23.467 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:23.467 "is_configured": false, 00:17:23.467 "data_offset": 0, 00:17:23.467 "data_size": 63488 
00:17:23.467 }, 00:17:23.467 { 00:17:23.467 "name": "BaseBdev3", 00:17:23.467 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:23.468 "is_configured": true, 00:17:23.468 "data_offset": 2048, 00:17:23.468 "data_size": 63488 00:17:23.468 }, 00:17:23.468 { 00:17:23.468 "name": "BaseBdev4", 00:17:23.468 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:23.468 "is_configured": true, 00:17:23.468 "data_offset": 2048, 00:17:23.468 "data_size": 63488 00:17:23.468 } 00:17:23.468 ] 00:17:23.468 }' 00:17:23.468 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.468 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 [2024-12-09 22:58:39.753143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.034 BaseBdev1 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 [ 00:17:24.034 { 00:17:24.034 "name": "BaseBdev1", 00:17:24.034 "aliases": [ 00:17:24.034 "dd5d9e22-442e-40cf-9342-b4695348929c" 00:17:24.034 ], 00:17:24.034 "product_name": "Malloc disk", 00:17:24.034 "block_size": 512, 00:17:24.034 "num_blocks": 65536, 00:17:24.034 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:24.034 "assigned_rate_limits": { 00:17:24.034 "rw_ios_per_sec": 0, 00:17:24.034 "rw_mbytes_per_sec": 0, 
00:17:24.034 "r_mbytes_per_sec": 0, 00:17:24.034 "w_mbytes_per_sec": 0 00:17:24.034 }, 00:17:24.034 "claimed": true, 00:17:24.034 "claim_type": "exclusive_write", 00:17:24.034 "zoned": false, 00:17:24.034 "supported_io_types": { 00:17:24.034 "read": true, 00:17:24.034 "write": true, 00:17:24.034 "unmap": true, 00:17:24.034 "flush": true, 00:17:24.034 "reset": true, 00:17:24.034 "nvme_admin": false, 00:17:24.034 "nvme_io": false, 00:17:24.034 "nvme_io_md": false, 00:17:24.034 "write_zeroes": true, 00:17:24.034 "zcopy": true, 00:17:24.034 "get_zone_info": false, 00:17:24.034 "zone_management": false, 00:17:24.034 "zone_append": false, 00:17:24.034 "compare": false, 00:17:24.034 "compare_and_write": false, 00:17:24.034 "abort": true, 00:17:24.034 "seek_hole": false, 00:17:24.034 "seek_data": false, 00:17:24.034 "copy": true, 00:17:24.034 "nvme_iov_md": false 00:17:24.034 }, 00:17:24.034 "memory_domains": [ 00:17:24.034 { 00:17:24.034 "dma_device_id": "system", 00:17:24.034 "dma_device_type": 1 00:17:24.034 }, 00:17:24.034 { 00:17:24.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.034 "dma_device_type": 2 00:17:24.034 } 00:17:24.034 ], 00:17:24.034 "driver_specific": {} 00:17:24.034 } 00:17:24.034 ] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:24.034 22:58:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.034 "name": "Existed_Raid", 00:17:24.034 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:24.034 "strip_size_kb": 64, 00:17:24.034 "state": "configuring", 00:17:24.034 "raid_level": "raid0", 00:17:24.034 "superblock": true, 00:17:24.034 "num_base_bdevs": 4, 00:17:24.034 "num_base_bdevs_discovered": 3, 00:17:24.034 "num_base_bdevs_operational": 4, 00:17:24.034 "base_bdevs_list": [ 00:17:24.034 { 00:17:24.034 "name": "BaseBdev1", 00:17:24.034 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:24.034 "is_configured": true, 00:17:24.034 "data_offset": 2048, 00:17:24.034 "data_size": 63488 00:17:24.034 }, 00:17:24.034 { 
00:17:24.034 "name": null, 00:17:24.034 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:24.034 "is_configured": false, 00:17:24.034 "data_offset": 0, 00:17:24.034 "data_size": 63488 00:17:24.034 }, 00:17:24.034 { 00:17:24.034 "name": "BaseBdev3", 00:17:24.034 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:24.034 "is_configured": true, 00:17:24.034 "data_offset": 2048, 00:17:24.034 "data_size": 63488 00:17:24.034 }, 00:17:24.034 { 00:17:24.034 "name": "BaseBdev4", 00:17:24.034 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:24.034 "is_configured": true, 00:17:24.034 "data_offset": 2048, 00:17:24.034 "data_size": 63488 00:17:24.034 } 00:17:24.034 ] 00:17:24.034 }' 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.034 22:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.598 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.598 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:24.598 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.598 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.598 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.598 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.599 [2024-12-09 22:58:40.332324] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.599 22:58:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.599 "name": "Existed_Raid", 00:17:24.599 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:24.599 "strip_size_kb": 64, 00:17:24.599 "state": "configuring", 00:17:24.599 "raid_level": "raid0", 00:17:24.599 "superblock": true, 00:17:24.599 "num_base_bdevs": 4, 00:17:24.599 "num_base_bdevs_discovered": 2, 00:17:24.599 "num_base_bdevs_operational": 4, 00:17:24.599 "base_bdevs_list": [ 00:17:24.599 { 00:17:24.599 "name": "BaseBdev1", 00:17:24.599 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:24.599 "is_configured": true, 00:17:24.599 "data_offset": 2048, 00:17:24.599 "data_size": 63488 00:17:24.599 }, 00:17:24.599 { 00:17:24.599 "name": null, 00:17:24.599 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:24.599 "is_configured": false, 00:17:24.599 "data_offset": 0, 00:17:24.599 "data_size": 63488 00:17:24.599 }, 00:17:24.599 { 00:17:24.599 "name": null, 00:17:24.599 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:24.599 "is_configured": false, 00:17:24.599 "data_offset": 0, 00:17:24.599 "data_size": 63488 00:17:24.599 }, 00:17:24.599 { 00:17:24.599 "name": "BaseBdev4", 00:17:24.599 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:24.599 "is_configured": true, 00:17:24.599 "data_offset": 2048, 00:17:24.599 "data_size": 63488 00:17:24.599 } 00:17:24.599 ] 00:17:24.599 }' 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.599 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.164 
22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.164 [2024-12-09 22:58:40.823507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.164 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.164 "name": "Existed_Raid", 00:17:25.164 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:25.164 "strip_size_kb": 64, 00:17:25.164 "state": "configuring", 00:17:25.164 "raid_level": "raid0", 00:17:25.164 "superblock": true, 00:17:25.164 "num_base_bdevs": 4, 00:17:25.164 "num_base_bdevs_discovered": 3, 00:17:25.164 "num_base_bdevs_operational": 4, 00:17:25.164 "base_bdevs_list": [ 00:17:25.164 { 00:17:25.164 "name": "BaseBdev1", 00:17:25.164 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:25.164 "is_configured": true, 00:17:25.164 "data_offset": 2048, 00:17:25.164 "data_size": 63488 00:17:25.165 }, 00:17:25.165 { 00:17:25.165 "name": null, 00:17:25.165 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:25.165 "is_configured": false, 00:17:25.165 "data_offset": 0, 00:17:25.165 "data_size": 63488 00:17:25.165 }, 00:17:25.165 { 00:17:25.165 "name": "BaseBdev3", 00:17:25.165 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:25.165 "is_configured": true, 00:17:25.165 "data_offset": 2048, 00:17:25.165 "data_size": 63488 00:17:25.165 }, 00:17:25.165 { 00:17:25.165 "name": "BaseBdev4", 00:17:25.165 "uuid": 
"532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:25.165 "is_configured": true, 00:17:25.165 "data_offset": 2048, 00:17:25.165 "data_size": 63488 00:17:25.165 } 00:17:25.165 ] 00:17:25.165 }' 00:17:25.165 22:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.165 22:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.730 [2024-12-09 22:58:41.362620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.730 "name": "Existed_Raid", 00:17:25.730 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:25.730 "strip_size_kb": 64, 00:17:25.730 "state": "configuring", 00:17:25.730 "raid_level": "raid0", 00:17:25.730 "superblock": true, 00:17:25.730 "num_base_bdevs": 4, 00:17:25.730 "num_base_bdevs_discovered": 2, 00:17:25.730 "num_base_bdevs_operational": 4, 00:17:25.730 "base_bdevs_list": [ 00:17:25.730 { 00:17:25.730 "name": null, 00:17:25.730 
"uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:25.730 "is_configured": false, 00:17:25.730 "data_offset": 0, 00:17:25.730 "data_size": 63488 00:17:25.730 }, 00:17:25.730 { 00:17:25.730 "name": null, 00:17:25.730 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:25.730 "is_configured": false, 00:17:25.730 "data_offset": 0, 00:17:25.730 "data_size": 63488 00:17:25.730 }, 00:17:25.730 { 00:17:25.730 "name": "BaseBdev3", 00:17:25.730 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:25.730 "is_configured": true, 00:17:25.730 "data_offset": 2048, 00:17:25.730 "data_size": 63488 00:17:25.730 }, 00:17:25.730 { 00:17:25.730 "name": "BaseBdev4", 00:17:25.730 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:25.730 "is_configured": true, 00:17:25.730 "data_offset": 2048, 00:17:25.730 "data_size": 63488 00:17:25.730 } 00:17:25.730 ] 00:17:25.730 }' 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.730 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.296 [2024-12-09 22:58:41.941201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.296 22:58:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.296 "name": "Existed_Raid", 00:17:26.296 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:26.296 "strip_size_kb": 64, 00:17:26.296 "state": "configuring", 00:17:26.296 "raid_level": "raid0", 00:17:26.296 "superblock": true, 00:17:26.296 "num_base_bdevs": 4, 00:17:26.296 "num_base_bdevs_discovered": 3, 00:17:26.296 "num_base_bdevs_operational": 4, 00:17:26.296 "base_bdevs_list": [ 00:17:26.296 { 00:17:26.296 "name": null, 00:17:26.296 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:26.296 "is_configured": false, 00:17:26.296 "data_offset": 0, 00:17:26.296 "data_size": 63488 00:17:26.296 }, 00:17:26.296 { 00:17:26.296 "name": "BaseBdev2", 00:17:26.296 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:26.296 "is_configured": true, 00:17:26.296 "data_offset": 2048, 00:17:26.296 "data_size": 63488 00:17:26.296 }, 00:17:26.296 { 00:17:26.296 "name": "BaseBdev3", 00:17:26.296 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:26.296 "is_configured": true, 00:17:26.296 "data_offset": 2048, 00:17:26.296 "data_size": 63488 00:17:26.296 }, 00:17:26.296 { 00:17:26.296 "name": "BaseBdev4", 00:17:26.296 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:26.296 "is_configured": true, 00:17:26.296 "data_offset": 2048, 00:17:26.296 "data_size": 63488 00:17:26.296 } 00:17:26.296 ] 00:17:26.296 }' 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.296 22:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.554 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.554 22:58:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:26.554 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.554 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.554 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.841 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:26.841 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.841 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:26.841 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.841 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.841 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dd5d9e22-442e-40cf-9342-b4695348929c 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.842 [2024-12-09 22:58:42.526781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:26.842 [2024-12-09 22:58:42.527072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:26.842 [2024-12-09 22:58:42.527087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:26.842 [2024-12-09 22:58:42.527398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:26.842 [2024-12-09 22:58:42.527595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:26.842 [2024-12-09 22:58:42.527610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:26.842 NewBaseBdev 00:17:26.842 [2024-12-09 22:58:42.527778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.842 22:58:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.842 [ 00:17:26.842 { 00:17:26.842 "name": "NewBaseBdev", 00:17:26.842 "aliases": [ 00:17:26.842 "dd5d9e22-442e-40cf-9342-b4695348929c" 00:17:26.842 ], 00:17:26.842 "product_name": "Malloc disk", 00:17:26.842 "block_size": 512, 00:17:26.842 "num_blocks": 65536, 00:17:26.842 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:26.842 "assigned_rate_limits": { 00:17:26.842 "rw_ios_per_sec": 0, 00:17:26.842 "rw_mbytes_per_sec": 0, 00:17:26.842 "r_mbytes_per_sec": 0, 00:17:26.842 "w_mbytes_per_sec": 0 00:17:26.842 }, 00:17:26.842 "claimed": true, 00:17:26.842 "claim_type": "exclusive_write", 00:17:26.842 "zoned": false, 00:17:26.842 "supported_io_types": { 00:17:26.842 "read": true, 00:17:26.842 "write": true, 00:17:26.842 "unmap": true, 00:17:26.842 "flush": true, 00:17:26.842 "reset": true, 00:17:26.842 "nvme_admin": false, 00:17:26.842 "nvme_io": false, 00:17:26.842 "nvme_io_md": false, 00:17:26.842 "write_zeroes": true, 00:17:26.842 "zcopy": true, 00:17:26.842 "get_zone_info": false, 00:17:26.842 "zone_management": false, 00:17:26.842 "zone_append": false, 00:17:26.842 "compare": false, 00:17:26.842 "compare_and_write": false, 00:17:26.842 "abort": true, 00:17:26.842 "seek_hole": false, 00:17:26.842 "seek_data": false, 00:17:26.842 "copy": true, 00:17:26.842 "nvme_iov_md": false 00:17:26.842 }, 00:17:26.842 "memory_domains": [ 00:17:26.842 { 00:17:26.842 "dma_device_id": "system", 00:17:26.842 "dma_device_type": 1 00:17:26.842 }, 00:17:26.842 { 00:17:26.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.842 "dma_device_type": 2 00:17:26.842 } 00:17:26.842 ], 00:17:26.842 "driver_specific": {} 00:17:26.842 } 00:17:26.842 ] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:26.842 22:58:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.842 "name": "Existed_Raid", 00:17:26.842 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:26.842 "strip_size_kb": 64, 00:17:26.842 
"state": "online", 00:17:26.842 "raid_level": "raid0", 00:17:26.842 "superblock": true, 00:17:26.842 "num_base_bdevs": 4, 00:17:26.842 "num_base_bdevs_discovered": 4, 00:17:26.842 "num_base_bdevs_operational": 4, 00:17:26.842 "base_bdevs_list": [ 00:17:26.842 { 00:17:26.842 "name": "NewBaseBdev", 00:17:26.842 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:26.842 "is_configured": true, 00:17:26.842 "data_offset": 2048, 00:17:26.842 "data_size": 63488 00:17:26.842 }, 00:17:26.842 { 00:17:26.842 "name": "BaseBdev2", 00:17:26.842 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:26.842 "is_configured": true, 00:17:26.842 "data_offset": 2048, 00:17:26.842 "data_size": 63488 00:17:26.842 }, 00:17:26.842 { 00:17:26.842 "name": "BaseBdev3", 00:17:26.842 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:26.842 "is_configured": true, 00:17:26.842 "data_offset": 2048, 00:17:26.842 "data_size": 63488 00:17:26.842 }, 00:17:26.842 { 00:17:26.842 "name": "BaseBdev4", 00:17:26.842 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:26.842 "is_configured": true, 00:17:26.842 "data_offset": 2048, 00:17:26.842 "data_size": 63488 00:17:26.842 } 00:17:26.842 ] 00:17:26.842 }' 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.842 22:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:27.417 
22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:27.417 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.418 [2024-12-09 22:58:43.042441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:27.418 "name": "Existed_Raid", 00:17:27.418 "aliases": [ 00:17:27.418 "320a69da-e826-4c12-83af-a5b27d27dd0f" 00:17:27.418 ], 00:17:27.418 "product_name": "Raid Volume", 00:17:27.418 "block_size": 512, 00:17:27.418 "num_blocks": 253952, 00:17:27.418 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:27.418 "assigned_rate_limits": { 00:17:27.418 "rw_ios_per_sec": 0, 00:17:27.418 "rw_mbytes_per_sec": 0, 00:17:27.418 "r_mbytes_per_sec": 0, 00:17:27.418 "w_mbytes_per_sec": 0 00:17:27.418 }, 00:17:27.418 "claimed": false, 00:17:27.418 "zoned": false, 00:17:27.418 "supported_io_types": { 00:17:27.418 "read": true, 00:17:27.418 "write": true, 00:17:27.418 "unmap": true, 00:17:27.418 "flush": true, 00:17:27.418 "reset": true, 00:17:27.418 "nvme_admin": false, 00:17:27.418 "nvme_io": false, 00:17:27.418 "nvme_io_md": false, 00:17:27.418 "write_zeroes": true, 00:17:27.418 "zcopy": false, 00:17:27.418 "get_zone_info": false, 00:17:27.418 "zone_management": false, 00:17:27.418 "zone_append": false, 00:17:27.418 "compare": false, 00:17:27.418 "compare_and_write": false, 00:17:27.418 "abort": 
false, 00:17:27.418 "seek_hole": false, 00:17:27.418 "seek_data": false, 00:17:27.418 "copy": false, 00:17:27.418 "nvme_iov_md": false 00:17:27.418 }, 00:17:27.418 "memory_domains": [ 00:17:27.418 { 00:17:27.418 "dma_device_id": "system", 00:17:27.418 "dma_device_type": 1 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.418 "dma_device_type": 2 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "system", 00:17:27.418 "dma_device_type": 1 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.418 "dma_device_type": 2 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "system", 00:17:27.418 "dma_device_type": 1 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.418 "dma_device_type": 2 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "system", 00:17:27.418 "dma_device_type": 1 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.418 "dma_device_type": 2 00:17:27.418 } 00:17:27.418 ], 00:17:27.418 "driver_specific": { 00:17:27.418 "raid": { 00:17:27.418 "uuid": "320a69da-e826-4c12-83af-a5b27d27dd0f", 00:17:27.418 "strip_size_kb": 64, 00:17:27.418 "state": "online", 00:17:27.418 "raid_level": "raid0", 00:17:27.418 "superblock": true, 00:17:27.418 "num_base_bdevs": 4, 00:17:27.418 "num_base_bdevs_discovered": 4, 00:17:27.418 "num_base_bdevs_operational": 4, 00:17:27.418 "base_bdevs_list": [ 00:17:27.418 { 00:17:27.418 "name": "NewBaseBdev", 00:17:27.418 "uuid": "dd5d9e22-442e-40cf-9342-b4695348929c", 00:17:27.418 "is_configured": true, 00:17:27.418 "data_offset": 2048, 00:17:27.418 "data_size": 63488 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "name": "BaseBdev2", 00:17:27.418 "uuid": "def589b2-a638-48f9-b156-d6038bb2f929", 00:17:27.418 "is_configured": true, 00:17:27.418 "data_offset": 2048, 00:17:27.418 "data_size": 63488 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 
"name": "BaseBdev3", 00:17:27.418 "uuid": "0e0f30f5-12b1-474b-b323-a615b69e754e", 00:17:27.418 "is_configured": true, 00:17:27.418 "data_offset": 2048, 00:17:27.418 "data_size": 63488 00:17:27.418 }, 00:17:27.418 { 00:17:27.418 "name": "BaseBdev4", 00:17:27.418 "uuid": "532ca28c-7822-4084-a3ca-877b4bb2c45a", 00:17:27.418 "is_configured": true, 00:17:27.418 "data_offset": 2048, 00:17:27.418 "data_size": 63488 00:17:27.418 } 00:17:27.418 ] 00:17:27.418 } 00:17:27.418 } 00:17:27.418 }' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:27.418 BaseBdev2 00:17:27.418 BaseBdev3 00:17:27.418 BaseBdev4' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.418 22:58:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.418 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.676 [2024-12-09 22:58:43.401563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.676 [2024-12-09 22:58:43.401688] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.676 [2024-12-09 22:58:43.401818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.676 [2024-12-09 22:58:43.401941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.676 [2024-12-09 22:58:43.401956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70617 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70617 ']' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70617 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70617 00:17:27.676 killing process with pid 70617 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70617' 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70617 00:17:27.676 22:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70617 00:17:27.676 [2024-12-09 22:58:43.434422] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.242 [2024-12-09 22:58:43.920318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.616 22:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:29.616 00:17:29.616 real 0m12.152s 00:17:29.616 user 0m19.187s 00:17:29.616 sys 0m1.749s 00:17:29.616 ************************************ 00:17:29.616 END TEST raid_state_function_test_sb 00:17:29.616 
************************************ 00:17:29.616 22:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.616 22:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.616 22:58:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:29.616 22:58:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:29.616 22:58:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.616 22:58:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.616 ************************************ 00:17:29.616 START TEST raid_superblock_test 00:17:29.616 ************************************ 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71293 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71293 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71293 ']' 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.616 22:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.616 [2024-12-09 22:58:45.471428] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:17:29.616 [2024-12-09 22:58:45.471669] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:17:29.875 [2024-12-09 22:58:45.653425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.132 [2024-12-09 22:58:45.782914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.390 [2024-12-09 22:58:46.006762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.390 [2024-12-09 22:58:46.006830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:30.647 
22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.647 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.906 malloc1 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.906 [2024-12-09 22:58:46.514611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.906 [2024-12-09 22:58:46.514741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.906 [2024-12-09 22:58:46.514772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:30.906 [2024-12-09 22:58:46.514784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.906 [2024-12-09 22:58:46.517279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.906 [2024-12-09 22:58:46.517321] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.906 pt1 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.906 malloc2 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.906 [2024-12-09 22:58:46.573609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.906 [2024-12-09 22:58:46.573750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.906 [2024-12-09 22:58:46.573786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.906 [2024-12-09 22:58:46.573799] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.906 [2024-12-09 22:58:46.576326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.906 [2024-12-09 22:58:46.576373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.906 
pt2 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.906 malloc3 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.906 [2024-12-09 22:58:46.646262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:30.906 [2024-12-09 22:58:46.646326] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.906 [2024-12-09 22:58:46.646351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:30.906 [2024-12-09 22:58:46.646362] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.906 [2024-12-09 22:58:46.648804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.906 [2024-12-09 22:58:46.648923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:30.906 pt3 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:30.906 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.907 malloc4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.907 [2024-12-09 22:58:46.709113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:30.907 [2024-12-09 22:58:46.709241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.907 [2024-12-09 22:58:46.709298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:30.907 [2024-12-09 22:58:46.709391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.907 [2024-12-09 22:58:46.711895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.907 [2024-12-09 22:58:46.711976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:30.907 pt4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.907 [2024-12-09 22:58:46.717133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:30.907 [2024-12-09 
22:58:46.719283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.907 [2024-12-09 22:58:46.719431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:30.907 [2024-12-09 22:58:46.719554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:30.907 [2024-12-09 22:58:46.719804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:30.907 [2024-12-09 22:58:46.719855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:30.907 [2024-12-09 22:58:46.720191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:30.907 [2024-12-09 22:58:46.720442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:30.907 [2024-12-09 22:58:46.720524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:30.907 [2024-12-09 22:58:46.720760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.907 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.165 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.165 "name": "raid_bdev1", 00:17:31.165 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:31.165 "strip_size_kb": 64, 00:17:31.165 "state": "online", 00:17:31.165 "raid_level": "raid0", 00:17:31.165 "superblock": true, 00:17:31.165 "num_base_bdevs": 4, 00:17:31.165 "num_base_bdevs_discovered": 4, 00:17:31.165 "num_base_bdevs_operational": 4, 00:17:31.165 "base_bdevs_list": [ 00:17:31.165 { 00:17:31.165 "name": "pt1", 00:17:31.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.165 "is_configured": true, 00:17:31.165 "data_offset": 2048, 00:17:31.165 "data_size": 63488 00:17:31.165 }, 00:17:31.165 { 00:17:31.165 "name": "pt2", 00:17:31.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.165 "is_configured": true, 00:17:31.165 "data_offset": 2048, 00:17:31.165 "data_size": 63488 00:17:31.165 }, 00:17:31.165 { 00:17:31.165 "name": "pt3", 00:17:31.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:31.165 "is_configured": true, 00:17:31.165 "data_offset": 2048, 00:17:31.165 
"data_size": 63488 00:17:31.165 }, 00:17:31.165 { 00:17:31.165 "name": "pt4", 00:17:31.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:31.165 "is_configured": true, 00:17:31.165 "data_offset": 2048, 00:17:31.165 "data_size": 63488 00:17:31.165 } 00:17:31.165 ] 00:17:31.165 }' 00:17:31.165 22:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.165 22:58:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.423 [2024-12-09 22:58:47.208983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.423 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:31.423 "name": "raid_bdev1", 00:17:31.423 "aliases": [ 00:17:31.423 "cf549e8a-5011-4dff-b52b-cec6febf7137" 
00:17:31.423 ], 00:17:31.423 "product_name": "Raid Volume", 00:17:31.423 "block_size": 512, 00:17:31.423 "num_blocks": 253952, 00:17:31.423 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:31.423 "assigned_rate_limits": { 00:17:31.423 "rw_ios_per_sec": 0, 00:17:31.423 "rw_mbytes_per_sec": 0, 00:17:31.423 "r_mbytes_per_sec": 0, 00:17:31.423 "w_mbytes_per_sec": 0 00:17:31.423 }, 00:17:31.423 "claimed": false, 00:17:31.423 "zoned": false, 00:17:31.423 "supported_io_types": { 00:17:31.423 "read": true, 00:17:31.423 "write": true, 00:17:31.423 "unmap": true, 00:17:31.423 "flush": true, 00:17:31.423 "reset": true, 00:17:31.423 "nvme_admin": false, 00:17:31.423 "nvme_io": false, 00:17:31.423 "nvme_io_md": false, 00:17:31.423 "write_zeroes": true, 00:17:31.423 "zcopy": false, 00:17:31.423 "get_zone_info": false, 00:17:31.423 "zone_management": false, 00:17:31.423 "zone_append": false, 00:17:31.423 "compare": false, 00:17:31.423 "compare_and_write": false, 00:17:31.423 "abort": false, 00:17:31.423 "seek_hole": false, 00:17:31.423 "seek_data": false, 00:17:31.423 "copy": false, 00:17:31.423 "nvme_iov_md": false 00:17:31.423 }, 00:17:31.423 "memory_domains": [ 00:17:31.423 { 00:17:31.423 "dma_device_id": "system", 00:17:31.423 "dma_device_type": 1 00:17:31.423 }, 00:17:31.423 { 00:17:31.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.423 "dma_device_type": 2 00:17:31.423 }, 00:17:31.423 { 00:17:31.423 "dma_device_id": "system", 00:17:31.423 "dma_device_type": 1 00:17:31.423 }, 00:17:31.423 { 00:17:31.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.424 "dma_device_type": 2 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "dma_device_id": "system", 00:17:31.424 "dma_device_type": 1 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.424 "dma_device_type": 2 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "dma_device_id": "system", 00:17:31.424 "dma_device_type": 1 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:31.424 "dma_device_type": 2 00:17:31.424 } 00:17:31.424 ], 00:17:31.424 "driver_specific": { 00:17:31.424 "raid": { 00:17:31.424 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:31.424 "strip_size_kb": 64, 00:17:31.424 "state": "online", 00:17:31.424 "raid_level": "raid0", 00:17:31.424 "superblock": true, 00:17:31.424 "num_base_bdevs": 4, 00:17:31.424 "num_base_bdevs_discovered": 4, 00:17:31.424 "num_base_bdevs_operational": 4, 00:17:31.424 "base_bdevs_list": [ 00:17:31.424 { 00:17:31.424 "name": "pt1", 00:17:31.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.424 "is_configured": true, 00:17:31.424 "data_offset": 2048, 00:17:31.424 "data_size": 63488 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "name": "pt2", 00:17:31.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.424 "is_configured": true, 00:17:31.424 "data_offset": 2048, 00:17:31.424 "data_size": 63488 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "name": "pt3", 00:17:31.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:31.424 "is_configured": true, 00:17:31.424 "data_offset": 2048, 00:17:31.424 "data_size": 63488 00:17:31.424 }, 00:17:31.424 { 00:17:31.424 "name": "pt4", 00:17:31.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:31.424 "is_configured": true, 00:17:31.424 "data_offset": 2048, 00:17:31.424 "data_size": 63488 00:17:31.424 } 00:17:31.424 ] 00:17:31.424 } 00:17:31.424 } 00:17:31.424 }' 00:17:31.424 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:31.682 pt2 00:17:31.682 pt3 00:17:31.682 pt4' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.682 22:58:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.682 [2024-12-09 22:58:47.504969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.682 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cf549e8a-5011-4dff-b52b-cec6febf7137 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cf549e8a-5011-4dff-b52b-cec6febf7137 ']' 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.941 [2024-12-09 22:58:47.548615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.941 [2024-12-09 22:58:47.548650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.941 [2024-12-09 22:58:47.548749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.941 [2024-12-09 22:58:47.548830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.941 [2024-12-09 22:58:47.548847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:31.941 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.942 22:58:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 [2024-12-09 22:58:47.696536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:31.942 [2024-12-09 22:58:47.698676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:31.942 [2024-12-09 22:58:47.698793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:31.942 [2024-12-09 22:58:47.698841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:31.942 [2024-12-09 22:58:47.698903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:31.942 [2024-12-09 22:58:47.698960] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:31.942 [2024-12-09 22:58:47.698983] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:31.942 [2024-12-09 22:58:47.699005] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:31.942 [2024-12-09 22:58:47.699021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.942 [2024-12-09 22:58:47.699037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:17:31.942 request: 00:17:31.942 { 00:17:31.942 "name": "raid_bdev1", 00:17:31.942 "raid_level": "raid0", 00:17:31.942 "base_bdevs": [ 00:17:31.942 "malloc1", 00:17:31.942 "malloc2", 00:17:31.942 "malloc3", 00:17:31.942 "malloc4" 00:17:31.942 ], 00:17:31.942 "strip_size_kb": 64, 00:17:31.942 "superblock": false, 00:17:31.942 "method": "bdev_raid_create", 00:17:31.942 "req_id": 1 00:17:31.942 } 00:17:31.942 Got JSON-RPC error response 00:17:31.942 response: 00:17:31.942 { 00:17:31.942 "code": -17, 00:17:31.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:31.942 } 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 [2024-12-09 22:58:47.776334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.942 [2024-12-09 22:58:47.776477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.942 [2024-12-09 22:58:47.776533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:31.942 [2024-12-09 22:58:47.776576] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.942 [2024-12-09 22:58:47.779115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.942 [2024-12-09 22:58:47.779200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.942 [2024-12-09 22:58:47.779336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:31.942 [2024-12-09 22:58:47.779433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.942 pt1 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.942 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.200 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.200 "name": "raid_bdev1", 00:17:32.200 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:32.200 "strip_size_kb": 64, 00:17:32.200 "state": "configuring", 00:17:32.200 "raid_level": "raid0", 00:17:32.200 "superblock": true, 00:17:32.200 "num_base_bdevs": 4, 00:17:32.200 "num_base_bdevs_discovered": 1, 00:17:32.200 "num_base_bdevs_operational": 4, 00:17:32.200 "base_bdevs_list": [ 00:17:32.200 { 00:17:32.200 "name": "pt1", 00:17:32.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.200 "is_configured": true, 00:17:32.200 "data_offset": 2048, 00:17:32.200 "data_size": 63488 00:17:32.200 }, 00:17:32.200 { 00:17:32.200 "name": null, 00:17:32.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.200 "is_configured": false, 00:17:32.200 "data_offset": 2048, 00:17:32.200 "data_size": 63488 00:17:32.200 }, 00:17:32.200 { 00:17:32.200 "name": null, 00:17:32.200 
"uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.200 "is_configured": false, 00:17:32.200 "data_offset": 2048, 00:17:32.200 "data_size": 63488 00:17:32.200 }, 00:17:32.200 { 00:17:32.200 "name": null, 00:17:32.200 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:32.200 "is_configured": false, 00:17:32.200 "data_offset": 2048, 00:17:32.200 "data_size": 63488 00:17:32.200 } 00:17:32.200 ] 00:17:32.200 }' 00:17:32.200 22:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.200 22:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.457 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:32.457 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.457 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.457 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 [2024-12-09 22:58:48.231633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.458 [2024-12-09 22:58:48.231775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.458 [2024-12-09 22:58:48.231827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:32.458 [2024-12-09 22:58:48.231864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.458 [2024-12-09 22:58:48.232412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.458 [2024-12-09 22:58:48.232514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.458 [2024-12-09 22:58:48.232656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:32.458 [2024-12-09 22:58:48.232718] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.458 pt2 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 [2024-12-09 22:58:48.243600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.458 22:58:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.458 "name": "raid_bdev1", 00:17:32.458 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:32.458 "strip_size_kb": 64, 00:17:32.458 "state": "configuring", 00:17:32.458 "raid_level": "raid0", 00:17:32.458 "superblock": true, 00:17:32.458 "num_base_bdevs": 4, 00:17:32.458 "num_base_bdevs_discovered": 1, 00:17:32.458 "num_base_bdevs_operational": 4, 00:17:32.458 "base_bdevs_list": [ 00:17:32.458 { 00:17:32.458 "name": "pt1", 00:17:32.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.458 "is_configured": true, 00:17:32.458 "data_offset": 2048, 00:17:32.458 "data_size": 63488 00:17:32.458 }, 00:17:32.458 { 00:17:32.458 "name": null, 00:17:32.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.458 "is_configured": false, 00:17:32.458 "data_offset": 0, 00:17:32.458 "data_size": 63488 00:17:32.458 }, 00:17:32.458 { 00:17:32.458 "name": null, 00:17:32.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.458 "is_configured": false, 00:17:32.458 "data_offset": 2048, 00:17:32.458 "data_size": 63488 00:17:32.458 }, 00:17:32.458 { 00:17:32.458 "name": null, 00:17:32.458 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:32.458 "is_configured": false, 00:17:32.458 "data_offset": 2048, 00:17:32.458 "data_size": 63488 00:17:32.458 } 00:17:32.458 ] 00:17:32.458 }' 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.458 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.026 [2024-12-09 22:58:48.742771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.026 [2024-12-09 22:58:48.742855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.026 [2024-12-09 22:58:48.742880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:33.026 [2024-12-09 22:58:48.742892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.026 [2024-12-09 22:58:48.743432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.026 [2024-12-09 22:58:48.743453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.026 [2024-12-09 22:58:48.743566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:33.026 [2024-12-09 22:58:48.743592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.026 pt2 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.026 [2024-12-09 22:58:48.754738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:33.026 [2024-12-09 22:58:48.754808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.026 [2024-12-09 22:58:48.754832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:33.026 [2024-12-09 22:58:48.754843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.026 [2024-12-09 22:58:48.755338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.026 [2024-12-09 22:58:48.755357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:33.026 [2024-12-09 22:58:48.755447] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:33.026 [2024-12-09 22:58:48.755491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:33.026 pt3 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.026 [2024-12-09 22:58:48.766709] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:17:33.026 [2024-12-09 22:58:48.766789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.026 [2024-12-09 22:58:48.766814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:33.026 [2024-12-09 22:58:48.766825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.026 [2024-12-09 22:58:48.767374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.026 [2024-12-09 22:58:48.767395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:33.026 [2024-12-09 22:58:48.767521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:33.026 [2024-12-09 22:58:48.767552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:33.026 [2024-12-09 22:58:48.767728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:33.026 [2024-12-09 22:58:48.767738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:33.026 [2024-12-09 22:58:48.768029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:33.026 [2024-12-09 22:58:48.768218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:33.026 [2024-12-09 22:58:48.768235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:33.026 [2024-12-09 22:58:48.768398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.026 pt4 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:33.026 
22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.026 "name": "raid_bdev1", 00:17:33.026 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:33.026 "strip_size_kb": 64, 00:17:33.026 "state": "online", 00:17:33.026 "raid_level": "raid0", 00:17:33.026 "superblock": true, 00:17:33.026 
"num_base_bdevs": 4, 00:17:33.026 "num_base_bdevs_discovered": 4, 00:17:33.026 "num_base_bdevs_operational": 4, 00:17:33.026 "base_bdevs_list": [ 00:17:33.026 { 00:17:33.026 "name": "pt1", 00:17:33.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.026 "is_configured": true, 00:17:33.026 "data_offset": 2048, 00:17:33.026 "data_size": 63488 00:17:33.026 }, 00:17:33.026 { 00:17:33.026 "name": "pt2", 00:17:33.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.026 "is_configured": true, 00:17:33.026 "data_offset": 2048, 00:17:33.026 "data_size": 63488 00:17:33.026 }, 00:17:33.026 { 00:17:33.026 "name": "pt3", 00:17:33.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.026 "is_configured": true, 00:17:33.026 "data_offset": 2048, 00:17:33.026 "data_size": 63488 00:17:33.026 }, 00:17:33.026 { 00:17:33.026 "name": "pt4", 00:17:33.026 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.026 "is_configured": true, 00:17:33.026 "data_offset": 2048, 00:17:33.026 "data_size": 63488 00:17:33.026 } 00:17:33.026 ] 00:17:33.026 }' 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.026 22:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 [2024-12-09 22:58:49.162443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.594 "name": "raid_bdev1", 00:17:33.594 "aliases": [ 00:17:33.594 "cf549e8a-5011-4dff-b52b-cec6febf7137" 00:17:33.594 ], 00:17:33.594 "product_name": "Raid Volume", 00:17:33.594 "block_size": 512, 00:17:33.594 "num_blocks": 253952, 00:17:33.594 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:33.594 "assigned_rate_limits": { 00:17:33.594 "rw_ios_per_sec": 0, 00:17:33.594 "rw_mbytes_per_sec": 0, 00:17:33.594 "r_mbytes_per_sec": 0, 00:17:33.594 "w_mbytes_per_sec": 0 00:17:33.594 }, 00:17:33.594 "claimed": false, 00:17:33.594 "zoned": false, 00:17:33.594 "supported_io_types": { 00:17:33.594 "read": true, 00:17:33.594 "write": true, 00:17:33.594 "unmap": true, 00:17:33.594 "flush": true, 00:17:33.594 "reset": true, 00:17:33.594 "nvme_admin": false, 00:17:33.594 "nvme_io": false, 00:17:33.594 "nvme_io_md": false, 00:17:33.594 "write_zeroes": true, 00:17:33.594 "zcopy": false, 00:17:33.594 "get_zone_info": false, 00:17:33.594 "zone_management": false, 00:17:33.594 "zone_append": false, 00:17:33.594 "compare": false, 00:17:33.594 "compare_and_write": false, 00:17:33.594 "abort": false, 00:17:33.594 "seek_hole": false, 00:17:33.594 "seek_data": false, 00:17:33.594 "copy": false, 00:17:33.594 "nvme_iov_md": false 00:17:33.594 }, 00:17:33.594 "memory_domains": [ 00:17:33.594 { 00:17:33.594 "dma_device_id": "system", 
00:17:33.594 "dma_device_type": 1 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.594 "dma_device_type": 2 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "system", 00:17:33.594 "dma_device_type": 1 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.594 "dma_device_type": 2 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "system", 00:17:33.594 "dma_device_type": 1 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.594 "dma_device_type": 2 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "system", 00:17:33.594 "dma_device_type": 1 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.594 "dma_device_type": 2 00:17:33.594 } 00:17:33.594 ], 00:17:33.594 "driver_specific": { 00:17:33.594 "raid": { 00:17:33.594 "uuid": "cf549e8a-5011-4dff-b52b-cec6febf7137", 00:17:33.594 "strip_size_kb": 64, 00:17:33.594 "state": "online", 00:17:33.594 "raid_level": "raid0", 00:17:33.594 "superblock": true, 00:17:33.594 "num_base_bdevs": 4, 00:17:33.594 "num_base_bdevs_discovered": 4, 00:17:33.594 "num_base_bdevs_operational": 4, 00:17:33.594 "base_bdevs_list": [ 00:17:33.594 { 00:17:33.594 "name": "pt1", 00:17:33.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.594 "is_configured": true, 00:17:33.594 "data_offset": 2048, 00:17:33.594 "data_size": 63488 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "name": "pt2", 00:17:33.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.594 "is_configured": true, 00:17:33.594 "data_offset": 2048, 00:17:33.594 "data_size": 63488 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "name": "pt3", 00:17:33.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.594 "is_configured": true, 00:17:33.594 "data_offset": 2048, 00:17:33.594 "data_size": 63488 00:17:33.594 }, 00:17:33.594 { 00:17:33.594 "name": "pt4", 00:17:33.594 
"uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.594 "is_configured": true, 00:17:33.594 "data_offset": 2048, 00:17:33.594 "data_size": 63488 00:17:33.594 } 00:17:33.594 ] 00:17:33.594 } 00:17:33.594 } 00:17:33.594 }' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:33.594 pt2 00:17:33.594 pt3 00:17:33.594 pt4' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:33.853 [2024-12-09 22:58:49.497995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cf549e8a-5011-4dff-b52b-cec6febf7137 '!=' cf549e8a-5011-4dff-b52b-cec6febf7137 ']' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71293 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71293 ']' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71293 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:33.853 22:58:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71293 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71293' 00:17:33.853 killing process with pid 71293 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71293 00:17:33.853 [2024-12-09 22:58:49.568920] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.853 [2024-12-09 22:58:49.569085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.853 22:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71293 00:17:33.853 [2024-12-09 22:58:49.569211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.853 [2024-12-09 22:58:49.569266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:34.419 [2024-12-09 22:58:50.056964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.797 22:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:35.797 00:17:35.797 real 0m6.009s 00:17:35.797 user 0m8.578s 00:17:35.797 sys 0m0.847s 00:17:35.797 22:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.797 22:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.797 ************************************ 00:17:35.797 END TEST raid_superblock_test 00:17:35.797 ************************************ 00:17:35.797 
22:58:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:17:35.797 22:58:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:35.797 22:58:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.797 22:58:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.797 ************************************ 00:17:35.797 START TEST raid_read_error_test 00:17:35.797 ************************************ 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j3ZbvzFmfS 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71561 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71561 00:17:35.797 22:58:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71561 ']' 00:17:35.797 22:58:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.798 22:58:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.798 22:58:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:35.798 22:58:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.798 22:58:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.798 22:58:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.798 [2024-12-09 22:58:51.549825] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:17:35.798 [2024-12-09 22:58:51.550748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71561 ] 00:17:36.055 [2024-12-09 22:58:51.733749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.055 [2024-12-09 22:58:51.871972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.314 [2024-12-09 22:58:52.112392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.314 [2024-12-09 22:58:52.112465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 BaseBdev1_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 true 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 [2024-12-09 22:58:52.566844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:36.882 [2024-12-09 22:58:52.566909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.882 [2024-12-09 22:58:52.566940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:36.882 [2024-12-09 22:58:52.566954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.882 [2024-12-09 22:58:52.569422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.882 [2024-12-09 22:58:52.569479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.882 BaseBdev1 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 BaseBdev2_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 true 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 [2024-12-09 22:58:52.630230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:36.882 [2024-12-09 22:58:52.630291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.882 [2024-12-09 22:58:52.630311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:36.882 [2024-12-09 22:58:52.630323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.882 [2024-12-09 22:58:52.632764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.882 [2024-12-09 22:58:52.632807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:36.882 BaseBdev2 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 BaseBdev3_malloc 00:17:36.882 22:58:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 true 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.882 [2024-12-09 22:58:52.703731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:36.882 [2024-12-09 22:58:52.703791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.882 [2024-12-09 22:58:52.703813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:36.882 [2024-12-09 22:58:52.703834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.882 [2024-12-09 22:58:52.706288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.882 [2024-12-09 22:58:52.706332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:36.882 BaseBdev3 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.882 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.142 BaseBdev4_malloc 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.142 true 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.142 [2024-12-09 22:58:52.762147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:37.142 [2024-12-09 22:58:52.762217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.142 [2024-12-09 22:58:52.762241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:37.142 [2024-12-09 22:58:52.762259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.142 [2024-12-09 22:58:52.764801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.142 [2024-12-09 22:58:52.764852] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:37.142 BaseBdev4 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.142 [2024-12-09 22:58:52.770213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.142 [2024-12-09 22:58:52.772399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.142 [2024-12-09 22:58:52.772521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.142 [2024-12-09 22:58:52.772609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:37.142 [2024-12-09 22:58:52.772927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:37.142 [2024-12-09 22:58:52.772962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:37.142 [2024-12-09 22:58:52.773282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:37.142 [2024-12-09 22:58:52.773539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:37.142 [2024-12-09 22:58:52.773563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:37.142 [2024-12-09 22:58:52.773787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:37.142 22:58:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.142 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.143 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.143 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.143 "name": "raid_bdev1", 00:17:37.143 "uuid": "77f3d200-c62d-4952-8044-ea13ca2bc2dc", 00:17:37.143 "strip_size_kb": 64, 00:17:37.143 "state": "online", 00:17:37.143 "raid_level": "raid0", 00:17:37.143 "superblock": true, 00:17:37.143 "num_base_bdevs": 4, 00:17:37.143 "num_base_bdevs_discovered": 4, 00:17:37.143 "num_base_bdevs_operational": 4, 00:17:37.143 "base_bdevs_list": [ 00:17:37.143 
{ 00:17:37.143 "name": "BaseBdev1", 00:17:37.143 "uuid": "2dd38e46-0746-5f1a-b96a-b6da21a70322", 00:17:37.143 "is_configured": true, 00:17:37.143 "data_offset": 2048, 00:17:37.143 "data_size": 63488 00:17:37.143 }, 00:17:37.143 { 00:17:37.143 "name": "BaseBdev2", 00:17:37.143 "uuid": "8bb7ab9d-0fdd-586c-9d3e-be1f3ff78d6e", 00:17:37.143 "is_configured": true, 00:17:37.143 "data_offset": 2048, 00:17:37.143 "data_size": 63488 00:17:37.143 }, 00:17:37.143 { 00:17:37.143 "name": "BaseBdev3", 00:17:37.143 "uuid": "75e40d6a-4e71-5c45-afb0-33ea34393c8f", 00:17:37.143 "is_configured": true, 00:17:37.143 "data_offset": 2048, 00:17:37.143 "data_size": 63488 00:17:37.143 }, 00:17:37.143 { 00:17:37.143 "name": "BaseBdev4", 00:17:37.143 "uuid": "a271d163-954b-5080-a5f6-102a107f6adf", 00:17:37.143 "is_configured": true, 00:17:37.143 "data_offset": 2048, 00:17:37.143 "data_size": 63488 00:17:37.143 } 00:17:37.143 ] 00:17:37.143 }' 00:17:37.143 22:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.143 22:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.402 22:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:37.402 22:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:37.661 [2024-12-09 22:58:53.318949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.600 22:58:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.600 22:58:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.600 "name": "raid_bdev1", 00:17:38.600 "uuid": "77f3d200-c62d-4952-8044-ea13ca2bc2dc", 00:17:38.600 "strip_size_kb": 64, 00:17:38.600 "state": "online", 00:17:38.600 "raid_level": "raid0", 00:17:38.600 "superblock": true, 00:17:38.600 "num_base_bdevs": 4, 00:17:38.600 "num_base_bdevs_discovered": 4, 00:17:38.600 "num_base_bdevs_operational": 4, 00:17:38.600 "base_bdevs_list": [ 00:17:38.600 { 00:17:38.600 "name": "BaseBdev1", 00:17:38.600 "uuid": "2dd38e46-0746-5f1a-b96a-b6da21a70322", 00:17:38.600 "is_configured": true, 00:17:38.600 "data_offset": 2048, 00:17:38.600 "data_size": 63488 00:17:38.600 }, 00:17:38.600 { 00:17:38.600 "name": "BaseBdev2", 00:17:38.600 "uuid": "8bb7ab9d-0fdd-586c-9d3e-be1f3ff78d6e", 00:17:38.600 "is_configured": true, 00:17:38.600 "data_offset": 2048, 00:17:38.600 "data_size": 63488 00:17:38.600 }, 00:17:38.600 { 00:17:38.600 "name": "BaseBdev3", 00:17:38.600 "uuid": "75e40d6a-4e71-5c45-afb0-33ea34393c8f", 00:17:38.600 "is_configured": true, 00:17:38.600 "data_offset": 2048, 00:17:38.600 "data_size": 63488 00:17:38.600 }, 00:17:38.600 { 00:17:38.600 "name": "BaseBdev4", 00:17:38.600 "uuid": "a271d163-954b-5080-a5f6-102a107f6adf", 00:17:38.600 "is_configured": true, 00:17:38.600 "data_offset": 2048, 00:17:38.600 "data_size": 63488 00:17:38.600 } 00:17:38.600 ] 00:17:38.600 }' 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.600 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.860 [2024-12-09 22:58:54.672029] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.860 [2024-12-09 22:58:54.672073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.860 [2024-12-09 22:58:54.675325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.860 [2024-12-09 22:58:54.675396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.860 [2024-12-09 22:58:54.675446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.860 [2024-12-09 22:58:54.675477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:38.860 { 00:17:38.860 "results": [ 00:17:38.860 { 00:17:38.860 "job": "raid_bdev1", 00:17:38.860 "core_mask": "0x1", 00:17:38.860 "workload": "randrw", 00:17:38.860 "percentage": 50, 00:17:38.860 "status": "finished", 00:17:38.860 "queue_depth": 1, 00:17:38.860 "io_size": 131072, 00:17:38.860 "runtime": 1.353632, 00:17:38.860 "iops": 12838.79222713411, 00:17:38.860 "mibps": 1604.8490283917638, 00:17:38.860 "io_failed": 1, 00:17:38.860 "io_timeout": 0, 00:17:38.860 "avg_latency_us": 107.66786176953885, 00:17:38.860 "min_latency_us": 32.419213973799124, 00:17:38.860 "max_latency_us": 1745.7187772925763 00:17:38.860 } 00:17:38.860 ], 00:17:38.860 "core_count": 1 00:17:38.860 } 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71561 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71561 ']' 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71561 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71561 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.860 killing process with pid 71561 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71561' 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71561 00:17:38.860 22:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71561 00:17:38.860 [2024-12-09 22:58:54.706755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.428 [2024-12-09 22:58:55.108389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j3ZbvzFmfS 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:17:40.808 00:17:40.808 real 0m5.170s 00:17:40.808 user 0m6.120s 00:17:40.808 sys 0m0.574s 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:40.808 22:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.808 ************************************ 00:17:40.808 END TEST raid_read_error_test 00:17:40.808 ************************************ 00:17:40.808 22:58:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:17:40.808 22:58:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:40.808 22:58:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.808 22:58:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.808 ************************************ 00:17:40.808 START TEST raid_write_error_test 00:17:40.808 ************************************ 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sU7ijpOJzC 00:17:40.808 22:58:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71709 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71709 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71709 ']' 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.808 22:58:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.067 [2024-12-09 22:58:56.761131] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:17:41.067 [2024-12-09 22:58:56.761323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:17:41.327 [2024-12-09 22:58:56.956827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.327 [2024-12-09 22:58:57.097839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.586 [2024-12-09 22:58:57.353355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.586 [2024-12-09 22:58:57.353432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.844 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.844 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:41.844 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:41.844 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.844 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.844 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 BaseBdev1_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 true 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 [2024-12-09 22:58:57.747905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:42.104 [2024-12-09 22:58:57.747983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.104 [2024-12-09 22:58:57.748009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:42.104 [2024-12-09 22:58:57.748023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.104 [2024-12-09 22:58:57.750606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.104 [2024-12-09 22:58:57.750667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.104 BaseBdev1 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 BaseBdev2_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:42.104 22:58:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 true 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 [2024-12-09 22:58:57.824949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:42.104 [2024-12-09 22:58:57.825023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.104 [2024-12-09 22:58:57.825044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:42.104 [2024-12-09 22:58:57.825057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.104 [2024-12-09 22:58:57.827547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.104 [2024-12-09 22:58:57.827593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.104 BaseBdev2 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:42.104 BaseBdev3_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 true 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.104 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.104 [2024-12-09 22:58:57.922687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:42.104 [2024-12-09 22:58:57.922763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.104 [2024-12-09 22:58:57.922788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:42.104 [2024-12-09 22:58:57.922801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.104 [2024-12-09 22:58:57.925391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.104 [2024-12-09 22:58:57.925443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:42.104 BaseBdev3 00:17:42.105 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.105 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:42.105 22:58:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:42.105 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.105 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 BaseBdev4_malloc 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.364 true 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:42.364 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.365 22:58:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.365 [2024-12-09 22:58:57.997504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:42.365 [2024-12-09 22:58:57.997569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.365 [2024-12-09 22:58:57.997590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:42.365 [2024-12-09 22:58:57.997603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.365 [2024-12-09 22:58:58.000050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.365 [2024-12-09 22:58:58.000100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:42.365 BaseBdev4 
00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.365 [2024-12-09 22:58:58.009560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.365 [2024-12-09 22:58:58.011671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.365 [2024-12-09 22:58:58.011761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.365 [2024-12-09 22:58:58.011841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.365 [2024-12-09 22:58:58.012110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:42.365 [2024-12-09 22:58:58.012138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:42.365 [2024-12-09 22:58:58.012439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:42.365 [2024-12-09 22:58:58.012653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:42.365 [2024-12-09 22:58:58.012676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:42.365 [2024-12-09 22:58:58.012859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.365 "name": "raid_bdev1", 00:17:42.365 "uuid": "8f09f74c-5899-4ab7-a357-b12a2f37b570", 00:17:42.365 "strip_size_kb": 64, 00:17:42.365 "state": "online", 00:17:42.365 "raid_level": "raid0", 00:17:42.365 "superblock": true, 00:17:42.365 "num_base_bdevs": 4, 00:17:42.365 "num_base_bdevs_discovered": 4, 00:17:42.365 
"num_base_bdevs_operational": 4, 00:17:42.365 "base_bdevs_list": [ 00:17:42.365 { 00:17:42.365 "name": "BaseBdev1", 00:17:42.365 "uuid": "3aa4f961-e460-5698-9a97-5030aec7f993", 00:17:42.365 "is_configured": true, 00:17:42.365 "data_offset": 2048, 00:17:42.365 "data_size": 63488 00:17:42.365 }, 00:17:42.365 { 00:17:42.365 "name": "BaseBdev2", 00:17:42.365 "uuid": "8106fd0a-7efe-5a68-a13b-ecfd5124407c", 00:17:42.365 "is_configured": true, 00:17:42.365 "data_offset": 2048, 00:17:42.365 "data_size": 63488 00:17:42.365 }, 00:17:42.365 { 00:17:42.365 "name": "BaseBdev3", 00:17:42.365 "uuid": "4cf4b1f5-6c11-5878-8cbb-768f42287864", 00:17:42.365 "is_configured": true, 00:17:42.365 "data_offset": 2048, 00:17:42.365 "data_size": 63488 00:17:42.365 }, 00:17:42.365 { 00:17:42.365 "name": "BaseBdev4", 00:17:42.365 "uuid": "a5eb56f1-c7a0-5d9a-ab9e-99a12e930e17", 00:17:42.365 "is_configured": true, 00:17:42.365 "data_offset": 2048, 00:17:42.365 "data_size": 63488 00:17:42.365 } 00:17:42.365 ] 00:17:42.365 }' 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.365 22:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.933 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:42.933 22:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:42.933 [2024-12-09 22:58:58.606072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:43.871 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.872 "name": "raid_bdev1", 00:17:43.872 "uuid": "8f09f74c-5899-4ab7-a357-b12a2f37b570", 00:17:43.872 "strip_size_kb": 64, 00:17:43.872 "state": "online", 00:17:43.872 "raid_level": "raid0", 00:17:43.872 "superblock": true, 00:17:43.872 "num_base_bdevs": 4, 00:17:43.872 "num_base_bdevs_discovered": 4, 00:17:43.872 "num_base_bdevs_operational": 4, 00:17:43.872 "base_bdevs_list": [ 00:17:43.872 { 00:17:43.872 "name": "BaseBdev1", 00:17:43.872 "uuid": "3aa4f961-e460-5698-9a97-5030aec7f993", 00:17:43.872 "is_configured": true, 00:17:43.872 "data_offset": 2048, 00:17:43.872 "data_size": 63488 00:17:43.872 }, 00:17:43.872 { 00:17:43.872 "name": "BaseBdev2", 00:17:43.872 "uuid": "8106fd0a-7efe-5a68-a13b-ecfd5124407c", 00:17:43.872 "is_configured": true, 00:17:43.872 "data_offset": 2048, 00:17:43.872 "data_size": 63488 00:17:43.872 }, 00:17:43.872 { 00:17:43.872 "name": "BaseBdev3", 00:17:43.872 "uuid": "4cf4b1f5-6c11-5878-8cbb-768f42287864", 00:17:43.872 "is_configured": true, 00:17:43.872 "data_offset": 2048, 00:17:43.872 "data_size": 63488 00:17:43.872 }, 00:17:43.872 { 00:17:43.872 "name": "BaseBdev4", 00:17:43.872 "uuid": "a5eb56f1-c7a0-5d9a-ab9e-99a12e930e17", 00:17:43.872 "is_configured": true, 00:17:43.872 "data_offset": 2048, 00:17:43.872 "data_size": 63488 00:17:43.872 } 00:17:43.872 ] 00:17:43.872 }' 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.872 22:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.441 22:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.441 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.441 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:44.441 [2024-12-09 22:59:00.012138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.441 [2024-12-09 22:59:00.012188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.441 [2024-12-09 22:59:00.015468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.441 [2024-12-09 22:59:00.015542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.442 [2024-12-09 22:59:00.015597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.442 [2024-12-09 22:59:00.015618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:44.442 { 00:17:44.442 "results": [ 00:17:44.442 { 00:17:44.442 "job": "raid_bdev1", 00:17:44.442 "core_mask": "0x1", 00:17:44.442 "workload": "randrw", 00:17:44.442 "percentage": 50, 00:17:44.442 "status": "finished", 00:17:44.442 "queue_depth": 1, 00:17:44.442 "io_size": 131072, 00:17:44.442 "runtime": 1.406797, 00:17:44.442 "iops": 12723.228724542347, 00:17:44.442 "mibps": 1590.4035905677933, 00:17:44.442 "io_failed": 1, 00:17:44.442 "io_timeout": 0, 00:17:44.442 "avg_latency_us": 108.62992871605961, 00:17:44.442 "min_latency_us": 33.31353711790393, 00:17:44.442 "max_latency_us": 1738.564192139738 00:17:44.442 } 00:17:44.442 ], 00:17:44.442 "core_count": 1 00:17:44.442 } 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71709 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71709 ']' 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71709 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71709 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.442 killing process with pid 71709 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71709' 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71709 00:17:44.442 [2024-12-09 22:59:00.052219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.442 22:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71709 00:17:44.701 [2024-12-09 22:59:00.447724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sU7ijpOJzC 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:17:46.130 00:17:46.130 real 0m5.241s 00:17:46.130 user 0m6.248s 00:17:46.130 sys 0m0.595s 00:17:46.130 22:59:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.130 22:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.130 ************************************ 00:17:46.130 END TEST raid_write_error_test 00:17:46.130 ************************************ 00:17:46.130 22:59:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:46.130 22:59:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:46.130 22:59:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:46.130 22:59:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.130 22:59:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.130 ************************************ 00:17:46.130 START TEST raid_state_function_test 00:17:46.130 ************************************ 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71860 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71860' 00:17:46.130 Process raid pid: 71860 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71860 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71860 ']' 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.130 22:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.389 [2024-12-09 22:59:02.025052] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:17:46.389 [2024-12-09 22:59:02.025188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.389 [2024-12-09 22:59:02.195422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.648 [2024-12-09 22:59:02.336155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.907 [2024-12-09 22:59:02.586615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.907 [2024-12-09 22:59:02.586660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.167 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.167 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:47.167 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:47.167 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.167 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.168 [2024-12-09 22:59:02.943971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.168 [2024-12-09 22:59:02.944039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.168 [2024-12-09 22:59:02.944056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.168 [2024-12-09 22:59:02.944069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.168 [2024-12-09 22:59:02.944077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:47.168 [2024-12-09 22:59:02.944088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.168 [2024-12-09 22:59:02.944095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:47.168 [2024-12-09 22:59:02.944105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.168 "name": "Existed_Raid", 00:17:47.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.168 "strip_size_kb": 64, 00:17:47.168 "state": "configuring", 00:17:47.168 "raid_level": "concat", 00:17:47.168 "superblock": false, 00:17:47.168 "num_base_bdevs": 4, 00:17:47.168 "num_base_bdevs_discovered": 0, 00:17:47.168 "num_base_bdevs_operational": 4, 00:17:47.168 "base_bdevs_list": [ 00:17:47.168 { 00:17:47.168 "name": "BaseBdev1", 00:17:47.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.168 "is_configured": false, 00:17:47.168 "data_offset": 0, 00:17:47.168 "data_size": 0 00:17:47.168 }, 00:17:47.168 { 00:17:47.168 "name": "BaseBdev2", 00:17:47.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.168 "is_configured": false, 00:17:47.168 "data_offset": 0, 00:17:47.168 "data_size": 0 00:17:47.168 }, 00:17:47.168 { 00:17:47.168 "name": "BaseBdev3", 00:17:47.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.168 "is_configured": false, 00:17:47.168 "data_offset": 0, 00:17:47.168 "data_size": 0 00:17:47.168 }, 00:17:47.168 { 00:17:47.168 "name": "BaseBdev4", 00:17:47.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.168 "is_configured": false, 00:17:47.168 "data_offset": 0, 00:17:47.168 "data_size": 0 00:17:47.168 } 00:17:47.168 ] 00:17:47.168 }' 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.168 22:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.736 [2024-12-09 22:59:03.387361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.736 [2024-12-09 22:59:03.387411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.736 [2024-12-09 22:59:03.395365] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.736 [2024-12-09 22:59:03.395421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.736 [2024-12-09 22:59:03.395433] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.736 [2024-12-09 22:59:03.395445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.736 [2024-12-09 22:59:03.395452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.736 [2024-12-09 22:59:03.395481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.736 [2024-12-09 22:59:03.395491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:47.736 [2024-12-09 22:59:03.395501] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.736 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.737 [2024-12-09 22:59:03.450120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.737 BaseBdev1 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.737 [ 00:17:47.737 { 00:17:47.737 "name": "BaseBdev1", 00:17:47.737 "aliases": [ 00:17:47.737 "c9e36798-02c2-43b0-8e53-a416a9565a98" 00:17:47.737 ], 00:17:47.737 "product_name": "Malloc disk", 00:17:47.737 "block_size": 512, 00:17:47.737 "num_blocks": 65536, 00:17:47.737 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:47.737 "assigned_rate_limits": { 00:17:47.737 "rw_ios_per_sec": 0, 00:17:47.737 "rw_mbytes_per_sec": 0, 00:17:47.737 "r_mbytes_per_sec": 0, 00:17:47.737 "w_mbytes_per_sec": 0 00:17:47.737 }, 00:17:47.737 "claimed": true, 00:17:47.737 "claim_type": "exclusive_write", 00:17:47.737 "zoned": false, 00:17:47.737 "supported_io_types": { 00:17:47.737 "read": true, 00:17:47.737 "write": true, 00:17:47.737 "unmap": true, 00:17:47.737 "flush": true, 00:17:47.737 "reset": true, 00:17:47.737 "nvme_admin": false, 00:17:47.737 "nvme_io": false, 00:17:47.737 "nvme_io_md": false, 00:17:47.737 "write_zeroes": true, 00:17:47.737 "zcopy": true, 00:17:47.737 "get_zone_info": false, 00:17:47.737 "zone_management": false, 00:17:47.737 "zone_append": false, 00:17:47.737 "compare": false, 00:17:47.737 "compare_and_write": false, 00:17:47.737 "abort": true, 00:17:47.737 "seek_hole": false, 00:17:47.737 "seek_data": false, 00:17:47.737 "copy": true, 00:17:47.737 "nvme_iov_md": false 00:17:47.737 }, 00:17:47.737 "memory_domains": [ 00:17:47.737 { 00:17:47.737 "dma_device_id": "system", 00:17:47.737 "dma_device_type": 1 00:17:47.737 }, 00:17:47.737 { 00:17:47.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.737 "dma_device_type": 2 00:17:47.737 } 00:17:47.737 ], 00:17:47.737 "driver_specific": {} 00:17:47.737 } 00:17:47.737 ] 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.737 "name": "Existed_Raid", 
00:17:47.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.737 "strip_size_kb": 64, 00:17:47.737 "state": "configuring", 00:17:47.737 "raid_level": "concat", 00:17:47.737 "superblock": false, 00:17:47.737 "num_base_bdevs": 4, 00:17:47.737 "num_base_bdevs_discovered": 1, 00:17:47.737 "num_base_bdevs_operational": 4, 00:17:47.737 "base_bdevs_list": [ 00:17:47.737 { 00:17:47.737 "name": "BaseBdev1", 00:17:47.737 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:47.737 "is_configured": true, 00:17:47.737 "data_offset": 0, 00:17:47.737 "data_size": 65536 00:17:47.737 }, 00:17:47.737 { 00:17:47.737 "name": "BaseBdev2", 00:17:47.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.737 "is_configured": false, 00:17:47.737 "data_offset": 0, 00:17:47.737 "data_size": 0 00:17:47.737 }, 00:17:47.737 { 00:17:47.737 "name": "BaseBdev3", 00:17:47.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.737 "is_configured": false, 00:17:47.737 "data_offset": 0, 00:17:47.737 "data_size": 0 00:17:47.737 }, 00:17:47.737 { 00:17:47.737 "name": "BaseBdev4", 00:17:47.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.737 "is_configured": false, 00:17:47.737 "data_offset": 0, 00:17:47.737 "data_size": 0 00:17:47.737 } 00:17:47.737 ] 00:17:47.737 }' 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.737 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.306 [2024-12-09 22:59:03.917421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.306 [2024-12-09 22:59:03.917501] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.306 [2024-12-09 22:59:03.929496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.306 [2024-12-09 22:59:03.931590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.306 [2024-12-09 22:59:03.931639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.306 [2024-12-09 22:59:03.931650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:48.306 [2024-12-09 22:59:03.931663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:48.306 [2024-12-09 22:59:03.931672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:48.306 [2024-12-09 22:59:03.931682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:17:48.306 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.307 "name": "Existed_Raid", 00:17:48.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.307 "strip_size_kb": 64, 00:17:48.307 "state": "configuring", 00:17:48.307 "raid_level": "concat", 00:17:48.307 "superblock": false, 00:17:48.307 "num_base_bdevs": 4, 00:17:48.307 
"num_base_bdevs_discovered": 1, 00:17:48.307 "num_base_bdevs_operational": 4, 00:17:48.307 "base_bdevs_list": [ 00:17:48.307 { 00:17:48.307 "name": "BaseBdev1", 00:17:48.307 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:48.307 "is_configured": true, 00:17:48.307 "data_offset": 0, 00:17:48.307 "data_size": 65536 00:17:48.307 }, 00:17:48.307 { 00:17:48.307 "name": "BaseBdev2", 00:17:48.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.307 "is_configured": false, 00:17:48.307 "data_offset": 0, 00:17:48.307 "data_size": 0 00:17:48.307 }, 00:17:48.307 { 00:17:48.307 "name": "BaseBdev3", 00:17:48.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.307 "is_configured": false, 00:17:48.307 "data_offset": 0, 00:17:48.307 "data_size": 0 00:17:48.307 }, 00:17:48.307 { 00:17:48.307 "name": "BaseBdev4", 00:17:48.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.307 "is_configured": false, 00:17:48.307 "data_offset": 0, 00:17:48.307 "data_size": 0 00:17:48.307 } 00:17:48.307 ] 00:17:48.307 }' 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.307 22:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.566 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.566 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.566 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.826 [2024-12-09 22:59:04.427841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.826 BaseBdev2 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:48.826 22:59:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.826 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.827 [ 00:17:48.827 { 00:17:48.827 "name": "BaseBdev2", 00:17:48.827 "aliases": [ 00:17:48.827 "a4247a7f-7255-48d3-ae1f-fd3b37cee787" 00:17:48.827 ], 00:17:48.827 "product_name": "Malloc disk", 00:17:48.827 "block_size": 512, 00:17:48.827 "num_blocks": 65536, 00:17:48.827 "uuid": "a4247a7f-7255-48d3-ae1f-fd3b37cee787", 00:17:48.827 "assigned_rate_limits": { 00:17:48.827 "rw_ios_per_sec": 0, 00:17:48.827 "rw_mbytes_per_sec": 0, 00:17:48.827 "r_mbytes_per_sec": 0, 00:17:48.827 "w_mbytes_per_sec": 0 00:17:48.827 }, 00:17:48.827 "claimed": true, 00:17:48.827 "claim_type": "exclusive_write", 00:17:48.827 "zoned": false, 00:17:48.827 "supported_io_types": { 
00:17:48.827 "read": true, 00:17:48.827 "write": true, 00:17:48.827 "unmap": true, 00:17:48.827 "flush": true, 00:17:48.827 "reset": true, 00:17:48.827 "nvme_admin": false, 00:17:48.827 "nvme_io": false, 00:17:48.827 "nvme_io_md": false, 00:17:48.827 "write_zeroes": true, 00:17:48.827 "zcopy": true, 00:17:48.827 "get_zone_info": false, 00:17:48.827 "zone_management": false, 00:17:48.827 "zone_append": false, 00:17:48.827 "compare": false, 00:17:48.827 "compare_and_write": false, 00:17:48.827 "abort": true, 00:17:48.827 "seek_hole": false, 00:17:48.827 "seek_data": false, 00:17:48.827 "copy": true, 00:17:48.827 "nvme_iov_md": false 00:17:48.827 }, 00:17:48.827 "memory_domains": [ 00:17:48.827 { 00:17:48.827 "dma_device_id": "system", 00:17:48.827 "dma_device_type": 1 00:17:48.827 }, 00:17:48.827 { 00:17:48.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.827 "dma_device_type": 2 00:17:48.827 } 00:17:48.827 ], 00:17:48.827 "driver_specific": {} 00:17:48.827 } 00:17:48.827 ] 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.827 "name": "Existed_Raid", 00:17:48.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.827 "strip_size_kb": 64, 00:17:48.827 "state": "configuring", 00:17:48.827 "raid_level": "concat", 00:17:48.827 "superblock": false, 00:17:48.827 "num_base_bdevs": 4, 00:17:48.827 "num_base_bdevs_discovered": 2, 00:17:48.827 "num_base_bdevs_operational": 4, 00:17:48.827 "base_bdevs_list": [ 00:17:48.827 { 00:17:48.827 "name": "BaseBdev1", 00:17:48.827 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:48.827 "is_configured": true, 00:17:48.827 "data_offset": 0, 00:17:48.827 "data_size": 65536 00:17:48.827 }, 00:17:48.827 { 00:17:48.827 "name": "BaseBdev2", 00:17:48.827 "uuid": "a4247a7f-7255-48d3-ae1f-fd3b37cee787", 00:17:48.827 
"is_configured": true, 00:17:48.827 "data_offset": 0, 00:17:48.827 "data_size": 65536 00:17:48.827 }, 00:17:48.827 { 00:17:48.827 "name": "BaseBdev3", 00:17:48.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.827 "is_configured": false, 00:17:48.827 "data_offset": 0, 00:17:48.827 "data_size": 0 00:17:48.827 }, 00:17:48.827 { 00:17:48.827 "name": "BaseBdev4", 00:17:48.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.827 "is_configured": false, 00:17:48.827 "data_offset": 0, 00:17:48.827 "data_size": 0 00:17:48.827 } 00:17:48.827 ] 00:17:48.827 }' 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.827 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.086 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:49.086 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.086 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.350 [2024-12-09 22:59:04.962532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.350 BaseBdev3 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.350 [ 00:17:49.350 { 00:17:49.350 "name": "BaseBdev3", 00:17:49.350 "aliases": [ 00:17:49.350 "6e7a3dec-995a-4aff-a4fb-cd3926473f16" 00:17:49.350 ], 00:17:49.350 "product_name": "Malloc disk", 00:17:49.350 "block_size": 512, 00:17:49.350 "num_blocks": 65536, 00:17:49.350 "uuid": "6e7a3dec-995a-4aff-a4fb-cd3926473f16", 00:17:49.350 "assigned_rate_limits": { 00:17:49.350 "rw_ios_per_sec": 0, 00:17:49.350 "rw_mbytes_per_sec": 0, 00:17:49.350 "r_mbytes_per_sec": 0, 00:17:49.350 "w_mbytes_per_sec": 0 00:17:49.350 }, 00:17:49.350 "claimed": true, 00:17:49.350 "claim_type": "exclusive_write", 00:17:49.350 "zoned": false, 00:17:49.350 "supported_io_types": { 00:17:49.350 "read": true, 00:17:49.350 "write": true, 00:17:49.350 "unmap": true, 00:17:49.350 "flush": true, 00:17:49.350 "reset": true, 00:17:49.350 "nvme_admin": false, 00:17:49.350 "nvme_io": false, 00:17:49.350 "nvme_io_md": false, 00:17:49.350 "write_zeroes": true, 00:17:49.350 "zcopy": true, 00:17:49.350 "get_zone_info": false, 00:17:49.350 "zone_management": false, 00:17:49.350 "zone_append": false, 00:17:49.350 "compare": false, 00:17:49.350 "compare_and_write": false, 
00:17:49.350 "abort": true, 00:17:49.350 "seek_hole": false, 00:17:49.350 "seek_data": false, 00:17:49.350 "copy": true, 00:17:49.350 "nvme_iov_md": false 00:17:49.350 }, 00:17:49.350 "memory_domains": [ 00:17:49.350 { 00:17:49.350 "dma_device_id": "system", 00:17:49.350 "dma_device_type": 1 00:17:49.350 }, 00:17:49.350 { 00:17:49.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.350 "dma_device_type": 2 00:17:49.350 } 00:17:49.350 ], 00:17:49.350 "driver_specific": {} 00:17:49.350 } 00:17:49.350 ] 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.350 22:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.350 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.350 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.350 "name": "Existed_Raid", 00:17:49.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.350 "strip_size_kb": 64, 00:17:49.350 "state": "configuring", 00:17:49.350 "raid_level": "concat", 00:17:49.350 "superblock": false, 00:17:49.350 "num_base_bdevs": 4, 00:17:49.350 "num_base_bdevs_discovered": 3, 00:17:49.350 "num_base_bdevs_operational": 4, 00:17:49.350 "base_bdevs_list": [ 00:17:49.350 { 00:17:49.350 "name": "BaseBdev1", 00:17:49.350 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:49.350 "is_configured": true, 00:17:49.350 "data_offset": 0, 00:17:49.350 "data_size": 65536 00:17:49.350 }, 00:17:49.350 { 00:17:49.350 "name": "BaseBdev2", 00:17:49.350 "uuid": "a4247a7f-7255-48d3-ae1f-fd3b37cee787", 00:17:49.350 "is_configured": true, 00:17:49.350 "data_offset": 0, 00:17:49.350 "data_size": 65536 00:17:49.350 }, 00:17:49.350 { 00:17:49.350 "name": "BaseBdev3", 00:17:49.350 "uuid": "6e7a3dec-995a-4aff-a4fb-cd3926473f16", 00:17:49.350 "is_configured": true, 00:17:49.350 "data_offset": 0, 00:17:49.350 "data_size": 65536 00:17:49.350 }, 00:17:49.350 { 00:17:49.350 "name": "BaseBdev4", 00:17:49.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.350 "is_configured": false, 
00:17:49.350 "data_offset": 0, 00:17:49.350 "data_size": 0 00:17:49.350 } 00:17:49.350 ] 00:17:49.350 }' 00:17:49.350 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.350 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.610 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:49.610 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.610 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.869 [2024-12-09 22:59:05.491635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:49.869 [2024-12-09 22:59:05.491813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.869 [2024-12-09 22:59:05.491850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:49.869 [2024-12-09 22:59:05.492294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:49.869 [2024-12-09 22:59:05.492585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.869 [2024-12-09 22:59:05.492640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:49.869 [2024-12-09 22:59:05.493006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.869 BaseBdev4 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.869 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.870 [ 00:17:49.870 { 00:17:49.870 "name": "BaseBdev4", 00:17:49.870 "aliases": [ 00:17:49.870 "32d740d9-89f2-467e-becf-f53bb8eccf2c" 00:17:49.870 ], 00:17:49.870 "product_name": "Malloc disk", 00:17:49.870 "block_size": 512, 00:17:49.870 "num_blocks": 65536, 00:17:49.870 "uuid": "32d740d9-89f2-467e-becf-f53bb8eccf2c", 00:17:49.870 "assigned_rate_limits": { 00:17:49.870 "rw_ios_per_sec": 0, 00:17:49.870 "rw_mbytes_per_sec": 0, 00:17:49.870 "r_mbytes_per_sec": 0, 00:17:49.870 "w_mbytes_per_sec": 0 00:17:49.870 }, 00:17:49.870 "claimed": true, 00:17:49.870 "claim_type": "exclusive_write", 00:17:49.870 "zoned": false, 00:17:49.870 "supported_io_types": { 00:17:49.870 "read": true, 00:17:49.870 "write": true, 00:17:49.870 "unmap": true, 00:17:49.870 "flush": true, 00:17:49.870 "reset": true, 00:17:49.870 
"nvme_admin": false, 00:17:49.870 "nvme_io": false, 00:17:49.870 "nvme_io_md": false, 00:17:49.870 "write_zeroes": true, 00:17:49.870 "zcopy": true, 00:17:49.870 "get_zone_info": false, 00:17:49.870 "zone_management": false, 00:17:49.870 "zone_append": false, 00:17:49.870 "compare": false, 00:17:49.870 "compare_and_write": false, 00:17:49.870 "abort": true, 00:17:49.870 "seek_hole": false, 00:17:49.870 "seek_data": false, 00:17:49.870 "copy": true, 00:17:49.870 "nvme_iov_md": false 00:17:49.870 }, 00:17:49.870 "memory_domains": [ 00:17:49.870 { 00:17:49.870 "dma_device_id": "system", 00:17:49.870 "dma_device_type": 1 00:17:49.870 }, 00:17:49.870 { 00:17:49.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.870 "dma_device_type": 2 00:17:49.870 } 00:17:49.870 ], 00:17:49.870 "driver_specific": {} 00:17:49.870 } 00:17:49.870 ] 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.870 
22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.870 "name": "Existed_Raid", 00:17:49.870 "uuid": "16239c6a-94aa-4cb1-8775-de77b8388eec", 00:17:49.870 "strip_size_kb": 64, 00:17:49.870 "state": "online", 00:17:49.870 "raid_level": "concat", 00:17:49.870 "superblock": false, 00:17:49.870 "num_base_bdevs": 4, 00:17:49.870 "num_base_bdevs_discovered": 4, 00:17:49.870 "num_base_bdevs_operational": 4, 00:17:49.870 "base_bdevs_list": [ 00:17:49.870 { 00:17:49.870 "name": "BaseBdev1", 00:17:49.870 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:49.870 "is_configured": true, 00:17:49.870 "data_offset": 0, 00:17:49.870 "data_size": 65536 00:17:49.870 }, 00:17:49.870 { 00:17:49.870 "name": "BaseBdev2", 00:17:49.870 "uuid": "a4247a7f-7255-48d3-ae1f-fd3b37cee787", 00:17:49.870 "is_configured": true, 00:17:49.870 "data_offset": 0, 00:17:49.870 "data_size": 65536 00:17:49.870 }, 00:17:49.870 { 00:17:49.870 "name": "BaseBdev3", 
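Once BaseBdev4 is added, `verify_raid_bdev_state Existed_Raid online concat 64 4` re-reads `bdev_raid_get_bdevs all`, selects the record with `jq`, and compares the fields against the expected values. A simplified sketch of those comparisons, run against a hand-trimmed copy of the record dumped above (the field names come from the dump; the comparison code here is illustrative, not the harness's own):

```shell
# Trimmed copy of the Existed_Raid record from bdev_raid_get_bdevs above.
raid='{"name":"Existed_Raid","state":"online","raid_level":"concat","strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":4}'

# Same checks verify_raid_bdev_state performs: state, level, and member count.
state=$(echo "$raid" | jq -r '.state')
level=$(echo "$raid" | jq -r '.raid_level')
disc=$(echo "$raid" | jq -r '.num_base_bdevs_discovered')
[ "$state" = online ] && [ "$level" = concat ] && [ "$disc" -eq 4 ]
echo "Existed_Raid: $state/$level, $disc base bdevs"
```

With all four base bdevs discovered the array transitions from `configuring` (three of four discovered, as in the earlier dump) to `online`.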
00:17:49.870 "uuid": "6e7a3dec-995a-4aff-a4fb-cd3926473f16", 00:17:49.870 "is_configured": true, 00:17:49.870 "data_offset": 0, 00:17:49.870 "data_size": 65536 00:17:49.870 }, 00:17:49.870 { 00:17:49.870 "name": "BaseBdev4", 00:17:49.870 "uuid": "32d740d9-89f2-467e-becf-f53bb8eccf2c", 00:17:49.870 "is_configured": true, 00:17:49.870 "data_offset": 0, 00:17:49.870 "data_size": 65536 00:17:49.870 } 00:17:49.870 ] 00:17:49.870 }' 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.870 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.129 22:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.388 [2024-12-09 22:59:05.987296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.388 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.388 
22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.388 "name": "Existed_Raid", 00:17:50.388 "aliases": [ 00:17:50.388 "16239c6a-94aa-4cb1-8775-de77b8388eec" 00:17:50.388 ], 00:17:50.388 "product_name": "Raid Volume", 00:17:50.388 "block_size": 512, 00:17:50.388 "num_blocks": 262144, 00:17:50.388 "uuid": "16239c6a-94aa-4cb1-8775-de77b8388eec", 00:17:50.388 "assigned_rate_limits": { 00:17:50.388 "rw_ios_per_sec": 0, 00:17:50.388 "rw_mbytes_per_sec": 0, 00:17:50.388 "r_mbytes_per_sec": 0, 00:17:50.388 "w_mbytes_per_sec": 0 00:17:50.388 }, 00:17:50.388 "claimed": false, 00:17:50.388 "zoned": false, 00:17:50.388 "supported_io_types": { 00:17:50.388 "read": true, 00:17:50.388 "write": true, 00:17:50.388 "unmap": true, 00:17:50.388 "flush": true, 00:17:50.388 "reset": true, 00:17:50.388 "nvme_admin": false, 00:17:50.388 "nvme_io": false, 00:17:50.388 "nvme_io_md": false, 00:17:50.388 "write_zeroes": true, 00:17:50.388 "zcopy": false, 00:17:50.388 "get_zone_info": false, 00:17:50.388 "zone_management": false, 00:17:50.388 "zone_append": false, 00:17:50.388 "compare": false, 00:17:50.388 "compare_and_write": false, 00:17:50.388 "abort": false, 00:17:50.388 "seek_hole": false, 00:17:50.388 "seek_data": false, 00:17:50.388 "copy": false, 00:17:50.388 "nvme_iov_md": false 00:17:50.388 }, 00:17:50.388 "memory_domains": [ 00:17:50.388 { 00:17:50.389 "dma_device_id": "system", 00:17:50.389 "dma_device_type": 1 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.389 "dma_device_type": 2 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": "system", 00:17:50.389 "dma_device_type": 1 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.389 "dma_device_type": 2 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": "system", 00:17:50.389 "dma_device_type": 1 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:50.389 "dma_device_type": 2 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": "system", 00:17:50.389 "dma_device_type": 1 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.389 "dma_device_type": 2 00:17:50.389 } 00:17:50.389 ], 00:17:50.389 "driver_specific": { 00:17:50.389 "raid": { 00:17:50.389 "uuid": "16239c6a-94aa-4cb1-8775-de77b8388eec", 00:17:50.389 "strip_size_kb": 64, 00:17:50.389 "state": "online", 00:17:50.389 "raid_level": "concat", 00:17:50.389 "superblock": false, 00:17:50.389 "num_base_bdevs": 4, 00:17:50.389 "num_base_bdevs_discovered": 4, 00:17:50.389 "num_base_bdevs_operational": 4, 00:17:50.389 "base_bdevs_list": [ 00:17:50.389 { 00:17:50.389 "name": "BaseBdev1", 00:17:50.389 "uuid": "c9e36798-02c2-43b0-8e53-a416a9565a98", 00:17:50.389 "is_configured": true, 00:17:50.389 "data_offset": 0, 00:17:50.389 "data_size": 65536 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "name": "BaseBdev2", 00:17:50.389 "uuid": "a4247a7f-7255-48d3-ae1f-fd3b37cee787", 00:17:50.389 "is_configured": true, 00:17:50.389 "data_offset": 0, 00:17:50.389 "data_size": 65536 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "name": "BaseBdev3", 00:17:50.389 "uuid": "6e7a3dec-995a-4aff-a4fb-cd3926473f16", 00:17:50.389 "is_configured": true, 00:17:50.389 "data_offset": 0, 00:17:50.389 "data_size": 65536 00:17:50.389 }, 00:17:50.389 { 00:17:50.389 "name": "BaseBdev4", 00:17:50.389 "uuid": "32d740d9-89f2-467e-becf-f53bb8eccf2c", 00:17:50.389 "is_configured": true, 00:17:50.389 "data_offset": 0, 00:17:50.389 "data_size": 65536 00:17:50.389 } 00:17:50.389 ] 00:17:50.389 } 00:17:50.389 } 00:17:50.389 }' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:50.389 BaseBdev2 
00:17:50.389 BaseBdev3 00:17:50.389 BaseBdev4' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.389 22:59:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.389 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.649 22:59:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.649 [2024-12-09 22:59:06.282521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.649 [2024-12-09 22:59:06.282552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.649 [2024-12-09 22:59:06.282611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.649 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.649 "name": "Existed_Raid", 00:17:50.649 "uuid": "16239c6a-94aa-4cb1-8775-de77b8388eec", 00:17:50.649 "strip_size_kb": 64, 00:17:50.649 "state": "offline", 00:17:50.649 "raid_level": "concat", 00:17:50.649 "superblock": false, 00:17:50.649 "num_base_bdevs": 4, 00:17:50.649 "num_base_bdevs_discovered": 3, 00:17:50.649 "num_base_bdevs_operational": 3, 00:17:50.649 "base_bdevs_list": [ 00:17:50.649 { 00:17:50.649 "name": null, 00:17:50.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.649 "is_configured": false, 00:17:50.649 "data_offset": 0, 00:17:50.649 "data_size": 65536 00:17:50.649 }, 00:17:50.649 { 00:17:50.649 "name": "BaseBdev2", 00:17:50.649 "uuid": "a4247a7f-7255-48d3-ae1f-fd3b37cee787", 00:17:50.649 "is_configured": 
true, 00:17:50.649 "data_offset": 0, 00:17:50.649 "data_size": 65536 00:17:50.649 }, 00:17:50.649 { 00:17:50.649 "name": "BaseBdev3", 00:17:50.649 "uuid": "6e7a3dec-995a-4aff-a4fb-cd3926473f16", 00:17:50.649 "is_configured": true, 00:17:50.649 "data_offset": 0, 00:17:50.649 "data_size": 65536 00:17:50.649 }, 00:17:50.649 { 00:17:50.649 "name": "BaseBdev4", 00:17:50.649 "uuid": "32d740d9-89f2-467e-becf-f53bb8eccf2c", 00:17:50.649 "is_configured": true, 00:17:50.649 "data_offset": 0, 00:17:50.649 "data_size": 65536 00:17:50.649 } 00:17:50.649 ] 00:17:50.649 }' 00:17:50.650 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.650 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
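The step above deletes BaseBdev1 and then expects `state: offline`: `has_redundancy concat` returns 1, so losing any single base bdev takes a concat array offline rather than degraded-but-online. A sketch of that decision (the function name below is hypothetical and the level names other than `concat` are illustrative, not a copy of the `has_redundancy` case statement in bdev_raid.sh):

```shell
# Sketch of the redundancy decision the test exercises: non-redundant levels
# (concat, raid0) go offline on the first base-bdev loss; redundant levels
# tolerate it. Hypothetical helper, not the harness's has_redundancy.
expected_state_after_removal() {
  case "$1" in
    raid1|raid5f) echo online ;;   # redundant: survives one member loss
    concat|raid0) echo offline ;;  # non-redundant: any loss is fatal
  esac
}
expected_state_after_removal concat
```

This is why the log shows `expected_state=offline` and then verifies `Existed_Raid offline concat 64 3` with BaseBdev1 replaced by a null entry in `base_bdevs_list`.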
00:17:51.216 22:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.216 [2024-12-09 22:59:06.955591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 [2024-12-09 22:59:07.127885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.475 22:59:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.475 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.475 [2024-12-09 22:59:07.304943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:51.475 [2024-12-09 22:59:07.305010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 BaseBdev2 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 [ 00:17:51.734 { 00:17:51.734 "name": "BaseBdev2", 00:17:51.734 "aliases": [ 00:17:51.734 "9a3926f9-1a13-4bb6-851b-950df15b7ac3" 00:17:51.734 ], 00:17:51.734 "product_name": "Malloc disk", 00:17:51.734 "block_size": 512, 00:17:51.734 "num_blocks": 65536, 00:17:51.734 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:51.734 "assigned_rate_limits": { 00:17:51.734 "rw_ios_per_sec": 0, 00:17:51.734 "rw_mbytes_per_sec": 0, 00:17:51.734 "r_mbytes_per_sec": 0, 00:17:51.734 "w_mbytes_per_sec": 0 00:17:51.734 }, 00:17:51.734 "claimed": false, 00:17:51.734 "zoned": false, 00:17:51.734 "supported_io_types": { 00:17:51.734 "read": true, 00:17:51.734 "write": true, 00:17:51.734 "unmap": true, 00:17:51.734 "flush": true, 00:17:51.734 "reset": true, 00:17:51.734 "nvme_admin": false, 00:17:51.734 "nvme_io": false, 00:17:51.734 "nvme_io_md": false, 00:17:51.734 "write_zeroes": true, 00:17:51.734 "zcopy": true, 00:17:51.734 "get_zone_info": false, 00:17:51.734 "zone_management": false, 00:17:51.734 "zone_append": false, 00:17:51.734 "compare": false, 00:17:51.734 "compare_and_write": false, 00:17:51.734 "abort": true, 00:17:51.734 "seek_hole": false, 00:17:51.734 
"seek_data": false, 00:17:51.734 "copy": true, 00:17:51.734 "nvme_iov_md": false 00:17:51.734 }, 00:17:51.734 "memory_domains": [ 00:17:51.734 { 00:17:51.734 "dma_device_id": "system", 00:17:51.734 "dma_device_type": 1 00:17:51.734 }, 00:17:51.734 { 00:17:51.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.734 "dma_device_type": 2 00:17:51.734 } 00:17:51.734 ], 00:17:51.734 "driver_specific": {} 00:17:51.734 } 00:17:51.734 ] 00:17:51.734 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.735 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:51.735 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.735 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.735 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.735 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.735 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.994 BaseBdev3 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.994 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.994 [ 00:17:51.994 { 00:17:51.994 "name": "BaseBdev3", 00:17:51.994 "aliases": [ 00:17:51.994 "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14" 00:17:51.994 ], 00:17:51.994 "product_name": "Malloc disk", 00:17:51.994 "block_size": 512, 00:17:51.994 "num_blocks": 65536, 00:17:51.994 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:51.994 "assigned_rate_limits": { 00:17:51.994 "rw_ios_per_sec": 0, 00:17:51.994 "rw_mbytes_per_sec": 0, 00:17:51.994 "r_mbytes_per_sec": 0, 00:17:51.994 "w_mbytes_per_sec": 0 00:17:51.994 }, 00:17:51.994 "claimed": false, 00:17:51.994 "zoned": false, 00:17:51.994 "supported_io_types": { 00:17:51.994 "read": true, 00:17:51.994 "write": true, 00:17:51.994 "unmap": true, 00:17:51.994 "flush": true, 00:17:51.994 "reset": true, 00:17:51.994 "nvme_admin": false, 00:17:51.994 "nvme_io": false, 00:17:51.994 "nvme_io_md": false, 00:17:51.994 "write_zeroes": true, 00:17:51.994 "zcopy": true, 00:17:51.994 "get_zone_info": false, 00:17:51.994 "zone_management": false, 00:17:51.994 "zone_append": false, 00:17:51.994 "compare": false, 00:17:51.994 "compare_and_write": false, 00:17:51.994 "abort": true, 00:17:51.994 "seek_hole": false, 00:17:51.994 "seek_data": false, 
00:17:51.994 "copy": true, 00:17:51.994 "nvme_iov_md": false 00:17:51.994 }, 00:17:51.994 "memory_domains": [ 00:17:51.994 { 00:17:51.994 "dma_device_id": "system", 00:17:51.994 "dma_device_type": 1 00:17:51.994 }, 00:17:51.995 { 00:17:51.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.995 "dma_device_type": 2 00:17:51.995 } 00:17:51.995 ], 00:17:51.995 "driver_specific": {} 00:17:51.995 } 00:17:51.995 ] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 BaseBdev4 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.995 
22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 [ 00:17:51.995 { 00:17:51.995 "name": "BaseBdev4", 00:17:51.995 "aliases": [ 00:17:51.995 "705d3117-743d-4902-9f42-81266483b375" 00:17:51.995 ], 00:17:51.995 "product_name": "Malloc disk", 00:17:51.995 "block_size": 512, 00:17:51.995 "num_blocks": 65536, 00:17:51.995 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:51.995 "assigned_rate_limits": { 00:17:51.995 "rw_ios_per_sec": 0, 00:17:51.995 "rw_mbytes_per_sec": 0, 00:17:51.995 "r_mbytes_per_sec": 0, 00:17:51.995 "w_mbytes_per_sec": 0 00:17:51.995 }, 00:17:51.995 "claimed": false, 00:17:51.995 "zoned": false, 00:17:51.995 "supported_io_types": { 00:17:51.995 "read": true, 00:17:51.995 "write": true, 00:17:51.995 "unmap": true, 00:17:51.995 "flush": true, 00:17:51.995 "reset": true, 00:17:51.995 "nvme_admin": false, 00:17:51.995 "nvme_io": false, 00:17:51.995 "nvme_io_md": false, 00:17:51.995 "write_zeroes": true, 00:17:51.995 "zcopy": true, 00:17:51.995 "get_zone_info": false, 00:17:51.995 "zone_management": false, 00:17:51.995 "zone_append": false, 00:17:51.995 "compare": false, 00:17:51.995 "compare_and_write": false, 00:17:51.995 "abort": true, 00:17:51.995 "seek_hole": false, 00:17:51.995 "seek_data": false, 00:17:51.995 
"copy": true, 00:17:51.995 "nvme_iov_md": false 00:17:51.995 }, 00:17:51.995 "memory_domains": [ 00:17:51.995 { 00:17:51.995 "dma_device_id": "system", 00:17:51.995 "dma_device_type": 1 00:17:51.995 }, 00:17:51.995 { 00:17:51.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.995 "dma_device_type": 2 00:17:51.995 } 00:17:51.995 ], 00:17:51.995 "driver_specific": {} 00:17:51.995 } 00:17:51.995 ] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 [2024-12-09 22:59:07.729718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.995 [2024-12-09 22:59:07.729822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.995 [2024-12-09 22:59:07.729883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.995 [2024-12-09 22:59:07.732076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.995 [2024-12-09 22:59:07.732184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.995 22:59:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.995 "name": "Existed_Raid", 00:17:51.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.995 "strip_size_kb": 64, 00:17:51.995 "state": "configuring", 00:17:51.995 
"raid_level": "concat", 00:17:51.995 "superblock": false, 00:17:51.995 "num_base_bdevs": 4, 00:17:51.995 "num_base_bdevs_discovered": 3, 00:17:51.995 "num_base_bdevs_operational": 4, 00:17:51.995 "base_bdevs_list": [ 00:17:51.995 { 00:17:51.995 "name": "BaseBdev1", 00:17:51.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.995 "is_configured": false, 00:17:51.995 "data_offset": 0, 00:17:51.995 "data_size": 0 00:17:51.995 }, 00:17:51.995 { 00:17:51.995 "name": "BaseBdev2", 00:17:51.995 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:51.995 "is_configured": true, 00:17:51.995 "data_offset": 0, 00:17:51.995 "data_size": 65536 00:17:51.995 }, 00:17:51.995 { 00:17:51.995 "name": "BaseBdev3", 00:17:51.995 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:51.995 "is_configured": true, 00:17:51.995 "data_offset": 0, 00:17:51.995 "data_size": 65536 00:17:51.995 }, 00:17:51.995 { 00:17:51.995 "name": "BaseBdev4", 00:17:51.995 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:51.995 "is_configured": true, 00:17:51.995 "data_offset": 0, 00:17:51.995 "data_size": 65536 00:17:51.995 } 00:17:51.995 ] 00:17:51.995 }' 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.995 22:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.562 [2024-12-09 22:59:08.213054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.562 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.562 "name": "Existed_Raid", 00:17:52.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.562 "strip_size_kb": 64, 00:17:52.562 "state": "configuring", 00:17:52.562 "raid_level": "concat", 00:17:52.562 "superblock": false, 
00:17:52.562 "num_base_bdevs": 4, 00:17:52.562 "num_base_bdevs_discovered": 2, 00:17:52.562 "num_base_bdevs_operational": 4, 00:17:52.562 "base_bdevs_list": [ 00:17:52.562 { 00:17:52.562 "name": "BaseBdev1", 00:17:52.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.562 "is_configured": false, 00:17:52.562 "data_offset": 0, 00:17:52.562 "data_size": 0 00:17:52.562 }, 00:17:52.562 { 00:17:52.562 "name": null, 00:17:52.563 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:52.563 "is_configured": false, 00:17:52.563 "data_offset": 0, 00:17:52.563 "data_size": 65536 00:17:52.563 }, 00:17:52.563 { 00:17:52.563 "name": "BaseBdev3", 00:17:52.563 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:52.563 "is_configured": true, 00:17:52.563 "data_offset": 0, 00:17:52.563 "data_size": 65536 00:17:52.563 }, 00:17:52.563 { 00:17:52.563 "name": "BaseBdev4", 00:17:52.563 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:52.563 "is_configured": true, 00:17:52.563 "data_offset": 0, 00:17:52.563 "data_size": 65536 00:17:52.563 } 00:17:52.563 ] 00:17:52.563 }' 00:17:52.563 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.563 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.821 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.821 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.821 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.821 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.821 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:53.081 22:59:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.081 [2024-12-09 22:59:08.744263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.081 BaseBdev1 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.081 [ 00:17:53.081 { 00:17:53.081 "name": "BaseBdev1", 00:17:53.081 "aliases": [ 00:17:53.081 "de89747f-00b9-4733-8c03-95e655b33c2d" 00:17:53.081 ], 00:17:53.081 "product_name": "Malloc disk", 00:17:53.081 "block_size": 512, 00:17:53.081 "num_blocks": 65536, 00:17:53.081 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:53.081 "assigned_rate_limits": { 00:17:53.081 "rw_ios_per_sec": 0, 00:17:53.081 "rw_mbytes_per_sec": 0, 00:17:53.081 "r_mbytes_per_sec": 0, 00:17:53.081 "w_mbytes_per_sec": 0 00:17:53.081 }, 00:17:53.081 "claimed": true, 00:17:53.081 "claim_type": "exclusive_write", 00:17:53.081 "zoned": false, 00:17:53.081 "supported_io_types": { 00:17:53.081 "read": true, 00:17:53.081 "write": true, 00:17:53.081 "unmap": true, 00:17:53.081 "flush": true, 00:17:53.081 "reset": true, 00:17:53.081 "nvme_admin": false, 00:17:53.081 "nvme_io": false, 00:17:53.081 "nvme_io_md": false, 00:17:53.081 "write_zeroes": true, 00:17:53.081 "zcopy": true, 00:17:53.081 "get_zone_info": false, 00:17:53.081 "zone_management": false, 00:17:53.081 "zone_append": false, 00:17:53.081 "compare": false, 00:17:53.081 "compare_and_write": false, 00:17:53.081 "abort": true, 00:17:53.081 "seek_hole": false, 00:17:53.081 "seek_data": false, 00:17:53.081 "copy": true, 00:17:53.081 "nvme_iov_md": false 00:17:53.081 }, 00:17:53.081 "memory_domains": [ 00:17:53.081 { 00:17:53.081 "dma_device_id": "system", 00:17:53.081 "dma_device_type": 1 00:17:53.081 }, 00:17:53.081 { 00:17:53.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.081 "dma_device_type": 2 00:17:53.081 } 00:17:53.081 ], 00:17:53.081 "driver_specific": {} 00:17:53.081 } 00:17:53.081 ] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.081 "name": "Existed_Raid", 00:17:53.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.081 "strip_size_kb": 64, 00:17:53.081 "state": "configuring", 00:17:53.081 "raid_level": "concat", 00:17:53.081 "superblock": false, 
00:17:53.081 "num_base_bdevs": 4, 00:17:53.081 "num_base_bdevs_discovered": 3, 00:17:53.081 "num_base_bdevs_operational": 4, 00:17:53.081 "base_bdevs_list": [ 00:17:53.081 { 00:17:53.081 "name": "BaseBdev1", 00:17:53.081 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:53.081 "is_configured": true, 00:17:53.081 "data_offset": 0, 00:17:53.081 "data_size": 65536 00:17:53.081 }, 00:17:53.081 { 00:17:53.081 "name": null, 00:17:53.081 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:53.081 "is_configured": false, 00:17:53.081 "data_offset": 0, 00:17:53.081 "data_size": 65536 00:17:53.081 }, 00:17:53.081 { 00:17:53.081 "name": "BaseBdev3", 00:17:53.081 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:53.081 "is_configured": true, 00:17:53.081 "data_offset": 0, 00:17:53.081 "data_size": 65536 00:17:53.081 }, 00:17:53.081 { 00:17:53.081 "name": "BaseBdev4", 00:17:53.081 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:53.081 "is_configured": true, 00:17:53.081 "data_offset": 0, 00:17:53.081 "data_size": 65536 00:17:53.081 } 00:17:53.081 ] 00:17:53.081 }' 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.081 22:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:53.649 22:59:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.649 [2024-12-09 22:59:09.303649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.649 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.649 "name": "Existed_Raid", 00:17:53.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.650 "strip_size_kb": 64, 00:17:53.650 "state": "configuring", 00:17:53.650 "raid_level": "concat", 00:17:53.650 "superblock": false, 00:17:53.650 "num_base_bdevs": 4, 00:17:53.650 "num_base_bdevs_discovered": 2, 00:17:53.650 "num_base_bdevs_operational": 4, 00:17:53.650 "base_bdevs_list": [ 00:17:53.650 { 00:17:53.650 "name": "BaseBdev1", 00:17:53.650 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:53.650 "is_configured": true, 00:17:53.650 "data_offset": 0, 00:17:53.650 "data_size": 65536 00:17:53.650 }, 00:17:53.650 { 00:17:53.650 "name": null, 00:17:53.650 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:53.650 "is_configured": false, 00:17:53.650 "data_offset": 0, 00:17:53.650 "data_size": 65536 00:17:53.650 }, 00:17:53.650 { 00:17:53.650 "name": null, 00:17:53.650 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:53.650 "is_configured": false, 00:17:53.650 "data_offset": 0, 00:17:53.650 "data_size": 65536 00:17:53.650 }, 00:17:53.650 { 00:17:53.650 "name": "BaseBdev4", 00:17:53.650 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:53.650 "is_configured": true, 00:17:53.650 "data_offset": 0, 00:17:53.650 "data_size": 65536 00:17:53.650 } 00:17:53.650 ] 00:17:53.650 }' 00:17:53.650 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.650 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.909 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:53.909 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:53.909 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.909 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.167 [2024-12-09 22:59:09.814686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.167 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.167 "name": "Existed_Raid", 00:17:54.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.167 "strip_size_kb": 64, 00:17:54.167 "state": "configuring", 00:17:54.167 "raid_level": "concat", 00:17:54.167 "superblock": false, 00:17:54.168 "num_base_bdevs": 4, 00:17:54.168 "num_base_bdevs_discovered": 3, 00:17:54.168 "num_base_bdevs_operational": 4, 00:17:54.168 "base_bdevs_list": [ 00:17:54.168 { 00:17:54.168 "name": "BaseBdev1", 00:17:54.168 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:54.168 "is_configured": true, 00:17:54.168 "data_offset": 0, 00:17:54.168 "data_size": 65536 00:17:54.168 }, 00:17:54.168 { 00:17:54.168 "name": null, 00:17:54.168 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:54.168 "is_configured": false, 00:17:54.168 "data_offset": 0, 00:17:54.168 "data_size": 65536 00:17:54.168 }, 00:17:54.168 { 00:17:54.168 "name": "BaseBdev3", 00:17:54.168 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:54.168 
"is_configured": true, 00:17:54.168 "data_offset": 0, 00:17:54.168 "data_size": 65536 00:17:54.168 }, 00:17:54.168 { 00:17:54.168 "name": "BaseBdev4", 00:17:54.168 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:54.168 "is_configured": true, 00:17:54.168 "data_offset": 0, 00:17:54.168 "data_size": 65536 00:17:54.168 } 00:17:54.168 ] 00:17:54.168 }' 00:17:54.168 22:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.168 22:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.734 [2024-12-09 22:59:10.349890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.734 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.734 "name": "Existed_Raid", 00:17:54.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.734 "strip_size_kb": 64, 00:17:54.734 "state": "configuring", 00:17:54.734 "raid_level": "concat", 00:17:54.734 "superblock": false, 00:17:54.734 "num_base_bdevs": 4, 00:17:54.734 "num_base_bdevs_discovered": 2, 00:17:54.734 "num_base_bdevs_operational": 4, 
00:17:54.734 "base_bdevs_list": [ 00:17:54.734 { 00:17:54.734 "name": null, 00:17:54.734 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:54.734 "is_configured": false, 00:17:54.734 "data_offset": 0, 00:17:54.734 "data_size": 65536 00:17:54.734 }, 00:17:54.734 { 00:17:54.734 "name": null, 00:17:54.734 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:54.734 "is_configured": false, 00:17:54.734 "data_offset": 0, 00:17:54.735 "data_size": 65536 00:17:54.735 }, 00:17:54.735 { 00:17:54.735 "name": "BaseBdev3", 00:17:54.735 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:54.735 "is_configured": true, 00:17:54.735 "data_offset": 0, 00:17:54.735 "data_size": 65536 00:17:54.735 }, 00:17:54.735 { 00:17:54.735 "name": "BaseBdev4", 00:17:54.735 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:54.735 "is_configured": true, 00:17:54.735 "data_offset": 0, 00:17:54.735 "data_size": 65536 00:17:54.735 } 00:17:54.735 ] 00:17:54.735 }' 00:17:54.735 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.735 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:55.303 22:59:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.303 22:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.303 [2024-12-09 22:59:11.000981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.303 22:59:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.303 "name": "Existed_Raid", 00:17:55.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.303 "strip_size_kb": 64, 00:17:55.303 "state": "configuring", 00:17:55.303 "raid_level": "concat", 00:17:55.303 "superblock": false, 00:17:55.303 "num_base_bdevs": 4, 00:17:55.303 "num_base_bdevs_discovered": 3, 00:17:55.303 "num_base_bdevs_operational": 4, 00:17:55.303 "base_bdevs_list": [ 00:17:55.303 { 00:17:55.303 "name": null, 00:17:55.303 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:55.303 "is_configured": false, 00:17:55.303 "data_offset": 0, 00:17:55.303 "data_size": 65536 00:17:55.303 }, 00:17:55.303 { 00:17:55.303 "name": "BaseBdev2", 00:17:55.303 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:55.303 "is_configured": true, 00:17:55.303 "data_offset": 0, 00:17:55.303 "data_size": 65536 00:17:55.303 }, 00:17:55.303 { 00:17:55.303 "name": "BaseBdev3", 00:17:55.303 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:55.303 "is_configured": true, 00:17:55.303 "data_offset": 0, 00:17:55.303 "data_size": 65536 00:17:55.303 }, 00:17:55.303 { 00:17:55.303 "name": "BaseBdev4", 00:17:55.303 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:55.303 "is_configured": true, 00:17:55.303 "data_offset": 0, 00:17:55.303 "data_size": 65536 00:17:55.303 } 00:17:55.303 ] 00:17:55.303 }' 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.303 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u de89747f-00b9-4733-8c03-95e655b33c2d 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.868 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.868 [2024-12-09 22:59:11.587818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:55.868 [2024-12-09 22:59:11.587894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:55.868 [2024-12-09 22:59:11.587905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:55.869 [2024-12-09 22:59:11.588243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:55.869 [2024-12-09 22:59:11.588455] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:55.869 [2024-12-09 22:59:11.588488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:55.869 [2024-12-09 22:59:11.588850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.869 NewBaseBdev 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.869 [ 00:17:55.869 { 
00:17:55.869 "name": "NewBaseBdev", 00:17:55.869 "aliases": [ 00:17:55.869 "de89747f-00b9-4733-8c03-95e655b33c2d" 00:17:55.869 ], 00:17:55.869 "product_name": "Malloc disk", 00:17:55.869 "block_size": 512, 00:17:55.869 "num_blocks": 65536, 00:17:55.869 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:55.869 "assigned_rate_limits": { 00:17:55.869 "rw_ios_per_sec": 0, 00:17:55.869 "rw_mbytes_per_sec": 0, 00:17:55.869 "r_mbytes_per_sec": 0, 00:17:55.869 "w_mbytes_per_sec": 0 00:17:55.869 }, 00:17:55.869 "claimed": true, 00:17:55.869 "claim_type": "exclusive_write", 00:17:55.869 "zoned": false, 00:17:55.869 "supported_io_types": { 00:17:55.869 "read": true, 00:17:55.869 "write": true, 00:17:55.869 "unmap": true, 00:17:55.869 "flush": true, 00:17:55.869 "reset": true, 00:17:55.869 "nvme_admin": false, 00:17:55.869 "nvme_io": false, 00:17:55.869 "nvme_io_md": false, 00:17:55.869 "write_zeroes": true, 00:17:55.869 "zcopy": true, 00:17:55.869 "get_zone_info": false, 00:17:55.869 "zone_management": false, 00:17:55.869 "zone_append": false, 00:17:55.869 "compare": false, 00:17:55.869 "compare_and_write": false, 00:17:55.869 "abort": true, 00:17:55.869 "seek_hole": false, 00:17:55.869 "seek_data": false, 00:17:55.869 "copy": true, 00:17:55.869 "nvme_iov_md": false 00:17:55.869 }, 00:17:55.869 "memory_domains": [ 00:17:55.869 { 00:17:55.869 "dma_device_id": "system", 00:17:55.869 "dma_device_type": 1 00:17:55.869 }, 00:17:55.869 { 00:17:55.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.869 "dma_device_type": 2 00:17:55.869 } 00:17:55.869 ], 00:17:55.869 "driver_specific": {} 00:17:55.869 } 00:17:55.869 ] 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:55.869 
22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.869 "name": "Existed_Raid", 00:17:55.869 "uuid": "bc09557c-aa0d-4bf9-8803-98589398033a", 00:17:55.869 "strip_size_kb": 64, 00:17:55.869 "state": "online", 00:17:55.869 "raid_level": "concat", 00:17:55.869 "superblock": false, 00:17:55.869 "num_base_bdevs": 4, 00:17:55.869 "num_base_bdevs_discovered": 4, 00:17:55.869 
"num_base_bdevs_operational": 4, 00:17:55.869 "base_bdevs_list": [ 00:17:55.869 { 00:17:55.869 "name": "NewBaseBdev", 00:17:55.869 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:55.869 "is_configured": true, 00:17:55.869 "data_offset": 0, 00:17:55.869 "data_size": 65536 00:17:55.869 }, 00:17:55.869 { 00:17:55.869 "name": "BaseBdev2", 00:17:55.869 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:55.869 "is_configured": true, 00:17:55.869 "data_offset": 0, 00:17:55.869 "data_size": 65536 00:17:55.869 }, 00:17:55.869 { 00:17:55.869 "name": "BaseBdev3", 00:17:55.869 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:55.869 "is_configured": true, 00:17:55.869 "data_offset": 0, 00:17:55.869 "data_size": 65536 00:17:55.869 }, 00:17:55.869 { 00:17:55.869 "name": "BaseBdev4", 00:17:55.869 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:55.869 "is_configured": true, 00:17:55.869 "data_offset": 0, 00:17:55.869 "data_size": 65536 00:17:55.869 } 00:17:55.869 ] 00:17:55.869 }' 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.869 22:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.435 [2024-12-09 22:59:12.119471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.435 "name": "Existed_Raid", 00:17:56.435 "aliases": [ 00:17:56.435 "bc09557c-aa0d-4bf9-8803-98589398033a" 00:17:56.435 ], 00:17:56.435 "product_name": "Raid Volume", 00:17:56.435 "block_size": 512, 00:17:56.435 "num_blocks": 262144, 00:17:56.435 "uuid": "bc09557c-aa0d-4bf9-8803-98589398033a", 00:17:56.435 "assigned_rate_limits": { 00:17:56.435 "rw_ios_per_sec": 0, 00:17:56.435 "rw_mbytes_per_sec": 0, 00:17:56.435 "r_mbytes_per_sec": 0, 00:17:56.435 "w_mbytes_per_sec": 0 00:17:56.435 }, 00:17:56.435 "claimed": false, 00:17:56.435 "zoned": false, 00:17:56.435 "supported_io_types": { 00:17:56.435 "read": true, 00:17:56.435 "write": true, 00:17:56.435 "unmap": true, 00:17:56.435 "flush": true, 00:17:56.435 "reset": true, 00:17:56.435 "nvme_admin": false, 00:17:56.435 "nvme_io": false, 00:17:56.435 "nvme_io_md": false, 00:17:56.435 "write_zeroes": true, 00:17:56.435 "zcopy": false, 00:17:56.435 "get_zone_info": false, 00:17:56.435 "zone_management": false, 00:17:56.435 "zone_append": false, 00:17:56.435 "compare": false, 00:17:56.435 "compare_and_write": false, 00:17:56.435 "abort": false, 00:17:56.435 "seek_hole": false, 00:17:56.435 "seek_data": false, 00:17:56.435 "copy": false, 00:17:56.435 "nvme_iov_md": false 00:17:56.435 }, 00:17:56.435 "memory_domains": [ 00:17:56.435 { 00:17:56.435 "dma_device_id": "system", 
00:17:56.435 "dma_device_type": 1 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.435 "dma_device_type": 2 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "system", 00:17:56.435 "dma_device_type": 1 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.435 "dma_device_type": 2 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "system", 00:17:56.435 "dma_device_type": 1 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.435 "dma_device_type": 2 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "system", 00:17:56.435 "dma_device_type": 1 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.435 "dma_device_type": 2 00:17:56.435 } 00:17:56.435 ], 00:17:56.435 "driver_specific": { 00:17:56.435 "raid": { 00:17:56.435 "uuid": "bc09557c-aa0d-4bf9-8803-98589398033a", 00:17:56.435 "strip_size_kb": 64, 00:17:56.435 "state": "online", 00:17:56.435 "raid_level": "concat", 00:17:56.435 "superblock": false, 00:17:56.435 "num_base_bdevs": 4, 00:17:56.435 "num_base_bdevs_discovered": 4, 00:17:56.435 "num_base_bdevs_operational": 4, 00:17:56.435 "base_bdevs_list": [ 00:17:56.435 { 00:17:56.435 "name": "NewBaseBdev", 00:17:56.435 "uuid": "de89747f-00b9-4733-8c03-95e655b33c2d", 00:17:56.435 "is_configured": true, 00:17:56.435 "data_offset": 0, 00:17:56.435 "data_size": 65536 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "name": "BaseBdev2", 00:17:56.435 "uuid": "9a3926f9-1a13-4bb6-851b-950df15b7ac3", 00:17:56.435 "is_configured": true, 00:17:56.435 "data_offset": 0, 00:17:56.435 "data_size": 65536 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "name": "BaseBdev3", 00:17:56.435 "uuid": "be9f6ff1-2915-4e8c-b9cf-ca858a1e4d14", 00:17:56.435 "is_configured": true, 00:17:56.435 "data_offset": 0, 00:17:56.435 "data_size": 65536 00:17:56.435 }, 00:17:56.435 { 00:17:56.435 "name": "BaseBdev4", 
00:17:56.435 "uuid": "705d3117-743d-4902-9f42-81266483b375", 00:17:56.435 "is_configured": true, 00:17:56.435 "data_offset": 0, 00:17:56.435 "data_size": 65536 00:17:56.435 } 00:17:56.435 ] 00:17:56.435 } 00:17:56.435 } 00:17:56.435 }' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:56.435 BaseBdev2 00:17:56.435 BaseBdev3 00:17:56.435 BaseBdev4' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.435 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:56.694 22:59:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.694 [2024-12-09 22:59:12.450505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.694 [2024-12-09 22:59:12.450599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.694 [2024-12-09 22:59:12.450738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.694 [2024-12-09 22:59:12.450850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.694 [2024-12-09 22:59:12.450902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71860 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71860 ']' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71860 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71860 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71860' 00:17:56.694 killing process with pid 71860 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71860 00:17:56.694 [2024-12-09 22:59:12.499607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.694 22:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71860 00:17:57.263 [2024-12-09 22:59:12.957927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:58.642 00:17:58.642 real 0m12.220s 00:17:58.642 user 0m19.300s 00:17:58.642 sys 0m2.099s 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.642 ************************************ 00:17:58.642 END TEST raid_state_function_test 00:17:58.642 ************************************ 00:17:58.642 22:59:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:17:58.642 22:59:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:58.642 22:59:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.642 22:59:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.642 ************************************ 00:17:58.642 START TEST raid_state_function_test_sb 00:17:58.642 ************************************ 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:58.642 22:59:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:58.642 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72537 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 72537' 00:17:58.643 Process raid pid: 72537 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72537 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72537 ']' 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.643 22:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.643 [2024-12-09 22:59:14.337591] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:17:58.643 [2024-12-09 22:59:14.337824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.902 [2024-12-09 22:59:14.516696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.902 [2024-12-09 22:59:14.651292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.162 [2024-12-09 22:59:14.890884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.162 [2024-12-09 22:59:14.891011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.422 [2024-12-09 22:59:15.268190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.422 [2024-12-09 22:59:15.268331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.422 [2024-12-09 22:59:15.268378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.422 [2024-12-09 22:59:15.268408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.422 [2024-12-09 22:59:15.268628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:59.422 [2024-12-09 22:59:15.268672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:59.422 [2024-12-09 22:59:15.268705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:59.422 [2024-12-09 22:59:15.268734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.422 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.694 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.694 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.694 22:59:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.694 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.694 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.694 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.694 "name": "Existed_Raid", 00:17:59.694 "uuid": "e388d500-ae2a-4595-abb4-ce2b366a6d98", 00:17:59.694 "strip_size_kb": 64, 00:17:59.694 "state": "configuring", 00:17:59.694 "raid_level": "concat", 00:17:59.694 "superblock": true, 00:17:59.694 "num_base_bdevs": 4, 00:17:59.694 "num_base_bdevs_discovered": 0, 00:17:59.694 "num_base_bdevs_operational": 4, 00:17:59.694 "base_bdevs_list": [ 00:17:59.694 { 00:17:59.694 "name": "BaseBdev1", 00:17:59.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.694 "is_configured": false, 00:17:59.694 "data_offset": 0, 00:17:59.694 "data_size": 0 00:17:59.694 }, 00:17:59.694 { 00:17:59.694 "name": "BaseBdev2", 00:17:59.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.694 "is_configured": false, 00:17:59.694 "data_offset": 0, 00:17:59.694 "data_size": 0 00:17:59.694 }, 00:17:59.694 { 00:17:59.694 "name": "BaseBdev3", 00:17:59.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.694 "is_configured": false, 00:17:59.694 "data_offset": 0, 00:17:59.694 "data_size": 0 00:17:59.694 }, 00:17:59.694 { 00:17:59.694 "name": "BaseBdev4", 00:17:59.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.695 "is_configured": false, 00:17:59.695 "data_offset": 0, 00:17:59.695 "data_size": 0 00:17:59.695 } 00:17:59.695 ] 00:17:59.695 }' 00:17:59.695 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.695 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 22:59:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:59.961 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 [2024-12-09 22:59:15.787212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.961 [2024-12-09 22:59:15.787258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:59.961 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:59.961 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.962 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.962 [2024-12-09 22:59:15.799241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.962 [2024-12-09 22:59:15.799295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.962 [2024-12-09 22:59:15.799307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.962 [2024-12-09 22:59:15.799318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.962 [2024-12-09 22:59:15.799325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:59.962 [2024-12-09 22:59:15.799336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:59.962 [2024-12-09 22:59:15.799343] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:59.962 [2024-12-09 22:59:15.799353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:59.962 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.962 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:59.962 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.962 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.222 [2024-12-09 22:59:15.853816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.222 BaseBdev1 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.222 [ 00:18:00.222 { 00:18:00.222 "name": "BaseBdev1", 00:18:00.222 "aliases": [ 00:18:00.222 "2045c26a-f1e0-4639-8460-53f8009435a3" 00:18:00.222 ], 00:18:00.222 "product_name": "Malloc disk", 00:18:00.222 "block_size": 512, 00:18:00.222 "num_blocks": 65536, 00:18:00.222 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:00.222 "assigned_rate_limits": { 00:18:00.222 "rw_ios_per_sec": 0, 00:18:00.222 "rw_mbytes_per_sec": 0, 00:18:00.222 "r_mbytes_per_sec": 0, 00:18:00.222 "w_mbytes_per_sec": 0 00:18:00.222 }, 00:18:00.222 "claimed": true, 00:18:00.222 "claim_type": "exclusive_write", 00:18:00.222 "zoned": false, 00:18:00.222 "supported_io_types": { 00:18:00.222 "read": true, 00:18:00.222 "write": true, 00:18:00.222 "unmap": true, 00:18:00.222 "flush": true, 00:18:00.222 "reset": true, 00:18:00.222 "nvme_admin": false, 00:18:00.222 "nvme_io": false, 00:18:00.222 "nvme_io_md": false, 00:18:00.222 "write_zeroes": true, 00:18:00.222 "zcopy": true, 00:18:00.222 "get_zone_info": false, 00:18:00.222 "zone_management": false, 00:18:00.222 "zone_append": false, 00:18:00.222 "compare": false, 00:18:00.222 "compare_and_write": false, 00:18:00.222 "abort": true, 00:18:00.222 "seek_hole": false, 00:18:00.222 "seek_data": false, 00:18:00.222 "copy": true, 00:18:00.222 "nvme_iov_md": false 00:18:00.222 }, 00:18:00.222 "memory_domains": [ 00:18:00.222 { 00:18:00.222 "dma_device_id": "system", 00:18:00.222 "dma_device_type": 1 00:18:00.222 }, 00:18:00.222 { 00:18:00.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.222 "dma_device_type": 2 00:18:00.222 } 
00:18:00.222 ], 00:18:00.222 "driver_specific": {} 00:18:00.222 } 00:18:00.222 ] 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.222 22:59:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.222 "name": "Existed_Raid", 00:18:00.222 "uuid": "96907b8c-32fe-469a-bc9d-79a4329c9d22", 00:18:00.222 "strip_size_kb": 64, 00:18:00.222 "state": "configuring", 00:18:00.222 "raid_level": "concat", 00:18:00.222 "superblock": true, 00:18:00.222 "num_base_bdevs": 4, 00:18:00.222 "num_base_bdevs_discovered": 1, 00:18:00.222 "num_base_bdevs_operational": 4, 00:18:00.222 "base_bdevs_list": [ 00:18:00.222 { 00:18:00.222 "name": "BaseBdev1", 00:18:00.222 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:00.222 "is_configured": true, 00:18:00.222 "data_offset": 2048, 00:18:00.222 "data_size": 63488 00:18:00.222 }, 00:18:00.222 { 00:18:00.222 "name": "BaseBdev2", 00:18:00.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.222 "is_configured": false, 00:18:00.222 "data_offset": 0, 00:18:00.222 "data_size": 0 00:18:00.222 }, 00:18:00.222 { 00:18:00.222 "name": "BaseBdev3", 00:18:00.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.222 "is_configured": false, 00:18:00.222 "data_offset": 0, 00:18:00.222 "data_size": 0 00:18:00.222 }, 00:18:00.222 { 00:18:00.222 "name": "BaseBdev4", 00:18:00.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.222 "is_configured": false, 00:18:00.222 "data_offset": 0, 00:18:00.222 "data_size": 0 00:18:00.222 } 00:18:00.222 ] 00:18:00.222 }' 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.222 22:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.791 22:59:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.791 [2024-12-09 22:59:16.345069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.791 [2024-12-09 22:59:16.345138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.791 [2024-12-09 22:59:16.357135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.791 [2024-12-09 22:59:16.359310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.791 [2024-12-09 22:59:16.359361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.791 [2024-12-09 22:59:16.359374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:00.791 [2024-12-09 22:59:16.359387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:00.791 [2024-12-09 22:59:16.359396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:00.791 [2024-12-09 22:59:16.359406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:00.791 "name": "Existed_Raid", 00:18:00.791 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:00.791 "strip_size_kb": 64, 00:18:00.791 "state": "configuring", 00:18:00.791 "raid_level": "concat", 00:18:00.791 "superblock": true, 00:18:00.791 "num_base_bdevs": 4, 00:18:00.791 "num_base_bdevs_discovered": 1, 00:18:00.791 "num_base_bdevs_operational": 4, 00:18:00.791 "base_bdevs_list": [ 00:18:00.791 { 00:18:00.791 "name": "BaseBdev1", 00:18:00.791 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:00.791 "is_configured": true, 00:18:00.791 "data_offset": 2048, 00:18:00.791 "data_size": 63488 00:18:00.791 }, 00:18:00.791 { 00:18:00.791 "name": "BaseBdev2", 00:18:00.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.791 "is_configured": false, 00:18:00.791 "data_offset": 0, 00:18:00.791 "data_size": 0 00:18:00.791 }, 00:18:00.791 { 00:18:00.791 "name": "BaseBdev3", 00:18:00.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.791 "is_configured": false, 00:18:00.791 "data_offset": 0, 00:18:00.791 "data_size": 0 00:18:00.791 }, 00:18:00.791 { 00:18:00.791 "name": "BaseBdev4", 00:18:00.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.791 "is_configured": false, 00:18:00.791 "data_offset": 0, 00:18:00.791 "data_size": 0 00:18:00.791 } 00:18:00.791 ] 00:18:00.791 }' 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.791 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.051 [2024-12-09 22:59:16.859894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:18:01.051 BaseBdev2 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:01.051 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.052 [ 00:18:01.052 { 00:18:01.052 "name": "BaseBdev2", 00:18:01.052 "aliases": [ 00:18:01.052 "435b2c9b-526d-492b-bcd7-5f320a4a746c" 00:18:01.052 ], 00:18:01.052 "product_name": "Malloc disk", 00:18:01.052 "block_size": 512, 00:18:01.052 "num_blocks": 65536, 00:18:01.052 "uuid": "435b2c9b-526d-492b-bcd7-5f320a4a746c", 
00:18:01.052 "assigned_rate_limits": { 00:18:01.052 "rw_ios_per_sec": 0, 00:18:01.052 "rw_mbytes_per_sec": 0, 00:18:01.052 "r_mbytes_per_sec": 0, 00:18:01.052 "w_mbytes_per_sec": 0 00:18:01.052 }, 00:18:01.052 "claimed": true, 00:18:01.052 "claim_type": "exclusive_write", 00:18:01.052 "zoned": false, 00:18:01.052 "supported_io_types": { 00:18:01.052 "read": true, 00:18:01.052 "write": true, 00:18:01.052 "unmap": true, 00:18:01.052 "flush": true, 00:18:01.052 "reset": true, 00:18:01.052 "nvme_admin": false, 00:18:01.052 "nvme_io": false, 00:18:01.052 "nvme_io_md": false, 00:18:01.052 "write_zeroes": true, 00:18:01.052 "zcopy": true, 00:18:01.052 "get_zone_info": false, 00:18:01.052 "zone_management": false, 00:18:01.052 "zone_append": false, 00:18:01.052 "compare": false, 00:18:01.052 "compare_and_write": false, 00:18:01.052 "abort": true, 00:18:01.052 "seek_hole": false, 00:18:01.052 "seek_data": false, 00:18:01.052 "copy": true, 00:18:01.052 "nvme_iov_md": false 00:18:01.052 }, 00:18:01.052 "memory_domains": [ 00:18:01.052 { 00:18:01.052 "dma_device_id": "system", 00:18:01.052 "dma_device_type": 1 00:18:01.052 }, 00:18:01.052 { 00:18:01.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.052 "dma_device_type": 2 00:18:01.052 } 00:18:01.052 ], 00:18:01.052 "driver_specific": {} 00:18:01.052 } 00:18:01.052 ] 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:01.052 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.312 "name": "Existed_Raid", 00:18:01.312 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:01.312 "strip_size_kb": 64, 00:18:01.312 "state": "configuring", 00:18:01.312 "raid_level": "concat", 00:18:01.312 "superblock": true, 00:18:01.312 "num_base_bdevs": 4, 00:18:01.312 "num_base_bdevs_discovered": 2, 00:18:01.312 
"num_base_bdevs_operational": 4, 00:18:01.312 "base_bdevs_list": [ 00:18:01.312 { 00:18:01.312 "name": "BaseBdev1", 00:18:01.312 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:01.312 "is_configured": true, 00:18:01.312 "data_offset": 2048, 00:18:01.312 "data_size": 63488 00:18:01.312 }, 00:18:01.312 { 00:18:01.312 "name": "BaseBdev2", 00:18:01.312 "uuid": "435b2c9b-526d-492b-bcd7-5f320a4a746c", 00:18:01.312 "is_configured": true, 00:18:01.312 "data_offset": 2048, 00:18:01.312 "data_size": 63488 00:18:01.312 }, 00:18:01.312 { 00:18:01.312 "name": "BaseBdev3", 00:18:01.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.312 "is_configured": false, 00:18:01.312 "data_offset": 0, 00:18:01.312 "data_size": 0 00:18:01.312 }, 00:18:01.312 { 00:18:01.312 "name": "BaseBdev4", 00:18:01.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.312 "is_configured": false, 00:18:01.312 "data_offset": 0, 00:18:01.312 "data_size": 0 00:18:01.312 } 00:18:01.312 ] 00:18:01.312 }' 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.312 22:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.571 [2024-12-09 22:59:17.400040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.571 BaseBdev3 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.571 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.571 [ 00:18:01.571 { 00:18:01.571 "name": "BaseBdev3", 00:18:01.571 "aliases": [ 00:18:01.571 "eab81ee4-3ce0-4c23-bce0-ea4125b29eff" 00:18:01.571 ], 00:18:01.831 "product_name": "Malloc disk", 00:18:01.831 "block_size": 512, 00:18:01.831 "num_blocks": 65536, 00:18:01.831 "uuid": "eab81ee4-3ce0-4c23-bce0-ea4125b29eff", 00:18:01.831 "assigned_rate_limits": { 00:18:01.831 "rw_ios_per_sec": 0, 00:18:01.831 "rw_mbytes_per_sec": 0, 00:18:01.831 "r_mbytes_per_sec": 0, 00:18:01.831 "w_mbytes_per_sec": 0 00:18:01.831 }, 00:18:01.831 "claimed": true, 00:18:01.831 "claim_type": "exclusive_write", 00:18:01.831 "zoned": false, 00:18:01.831 "supported_io_types": { 
00:18:01.831 "read": true, 00:18:01.831 "write": true, 00:18:01.831 "unmap": true, 00:18:01.831 "flush": true, 00:18:01.831 "reset": true, 00:18:01.831 "nvme_admin": false, 00:18:01.831 "nvme_io": false, 00:18:01.831 "nvme_io_md": false, 00:18:01.831 "write_zeroes": true, 00:18:01.831 "zcopy": true, 00:18:01.831 "get_zone_info": false, 00:18:01.831 "zone_management": false, 00:18:01.831 "zone_append": false, 00:18:01.831 "compare": false, 00:18:01.831 "compare_and_write": false, 00:18:01.831 "abort": true, 00:18:01.831 "seek_hole": false, 00:18:01.831 "seek_data": false, 00:18:01.831 "copy": true, 00:18:01.831 "nvme_iov_md": false 00:18:01.831 }, 00:18:01.831 "memory_domains": [ 00:18:01.831 { 00:18:01.831 "dma_device_id": "system", 00:18:01.831 "dma_device_type": 1 00:18:01.831 }, 00:18:01.831 { 00:18:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.831 "dma_device_type": 2 00:18:01.831 } 00:18:01.831 ], 00:18:01.831 "driver_specific": {} 00:18:01.831 } 00:18:01.831 ] 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.831 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.831 "name": "Existed_Raid", 00:18:01.831 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:01.831 "strip_size_kb": 64, 00:18:01.831 "state": "configuring", 00:18:01.831 "raid_level": "concat", 00:18:01.831 "superblock": true, 00:18:01.831 "num_base_bdevs": 4, 00:18:01.831 "num_base_bdevs_discovered": 3, 00:18:01.831 "num_base_bdevs_operational": 4, 00:18:01.831 "base_bdevs_list": [ 00:18:01.831 { 00:18:01.831 "name": "BaseBdev1", 00:18:01.831 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:01.831 "is_configured": true, 00:18:01.831 "data_offset": 2048, 00:18:01.831 "data_size": 63488 00:18:01.831 }, 00:18:01.831 { 00:18:01.831 "name": "BaseBdev2", 00:18:01.831 
"uuid": "435b2c9b-526d-492b-bcd7-5f320a4a746c", 00:18:01.831 "is_configured": true, 00:18:01.831 "data_offset": 2048, 00:18:01.831 "data_size": 63488 00:18:01.831 }, 00:18:01.831 { 00:18:01.831 "name": "BaseBdev3", 00:18:01.831 "uuid": "eab81ee4-3ce0-4c23-bce0-ea4125b29eff", 00:18:01.831 "is_configured": true, 00:18:01.831 "data_offset": 2048, 00:18:01.831 "data_size": 63488 00:18:01.831 }, 00:18:01.831 { 00:18:01.831 "name": "BaseBdev4", 00:18:01.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.831 "is_configured": false, 00:18:01.832 "data_offset": 0, 00:18:01.832 "data_size": 0 00:18:01.832 } 00:18:01.832 ] 00:18:01.832 }' 00:18:01.832 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.832 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.092 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:02.092 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.092 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.352 [2024-12-09 22:59:17.964082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:02.352 [2024-12-09 22:59:17.964516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:02.352 [2024-12-09 22:59:17.964578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:02.352 [2024-12-09 22:59:17.964893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:02.352 [2024-12-09 22:59:17.965102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:02.352 [2024-12-09 22:59:17.965150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:18:02.352 BaseBdev4 00:18:02.352 [2024-12-09 22:59:17.965346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.352 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.352 22:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:02.352 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:02.352 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:02.352 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:02.352 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.353 22:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.353 [ 00:18:02.353 { 00:18:02.353 "name": "BaseBdev4", 00:18:02.353 "aliases": [ 00:18:02.353 "f0334756-bda7-4382-9146-51a3911a3a87" 00:18:02.353 ], 00:18:02.353 "product_name": "Malloc disk", 00:18:02.353 "block_size": 512, 
00:18:02.353 "num_blocks": 65536, 00:18:02.353 "uuid": "f0334756-bda7-4382-9146-51a3911a3a87", 00:18:02.353 "assigned_rate_limits": { 00:18:02.353 "rw_ios_per_sec": 0, 00:18:02.353 "rw_mbytes_per_sec": 0, 00:18:02.353 "r_mbytes_per_sec": 0, 00:18:02.353 "w_mbytes_per_sec": 0 00:18:02.353 }, 00:18:02.353 "claimed": true, 00:18:02.353 "claim_type": "exclusive_write", 00:18:02.353 "zoned": false, 00:18:02.353 "supported_io_types": { 00:18:02.353 "read": true, 00:18:02.353 "write": true, 00:18:02.353 "unmap": true, 00:18:02.353 "flush": true, 00:18:02.353 "reset": true, 00:18:02.353 "nvme_admin": false, 00:18:02.353 "nvme_io": false, 00:18:02.353 "nvme_io_md": false, 00:18:02.353 "write_zeroes": true, 00:18:02.353 "zcopy": true, 00:18:02.353 "get_zone_info": false, 00:18:02.353 "zone_management": false, 00:18:02.353 "zone_append": false, 00:18:02.353 "compare": false, 00:18:02.353 "compare_and_write": false, 00:18:02.353 "abort": true, 00:18:02.353 "seek_hole": false, 00:18:02.353 "seek_data": false, 00:18:02.353 "copy": true, 00:18:02.353 "nvme_iov_md": false 00:18:02.353 }, 00:18:02.353 "memory_domains": [ 00:18:02.353 { 00:18:02.353 "dma_device_id": "system", 00:18:02.353 "dma_device_type": 1 00:18:02.353 }, 00:18:02.353 { 00:18:02.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.353 "dma_device_type": 2 00:18:02.353 } 00:18:02.353 ], 00:18:02.353 "driver_specific": {} 00:18:02.353 } 00:18:02.353 ] 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.353 "name": "Existed_Raid", 00:18:02.353 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:02.353 "strip_size_kb": 64, 00:18:02.353 "state": "online", 00:18:02.353 "raid_level": "concat", 00:18:02.353 "superblock": true, 00:18:02.353 "num_base_bdevs": 
4, 00:18:02.353 "num_base_bdevs_discovered": 4, 00:18:02.353 "num_base_bdevs_operational": 4, 00:18:02.353 "base_bdevs_list": [ 00:18:02.353 { 00:18:02.353 "name": "BaseBdev1", 00:18:02.353 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:02.353 "is_configured": true, 00:18:02.353 "data_offset": 2048, 00:18:02.353 "data_size": 63488 00:18:02.353 }, 00:18:02.353 { 00:18:02.353 "name": "BaseBdev2", 00:18:02.353 "uuid": "435b2c9b-526d-492b-bcd7-5f320a4a746c", 00:18:02.353 "is_configured": true, 00:18:02.353 "data_offset": 2048, 00:18:02.353 "data_size": 63488 00:18:02.353 }, 00:18:02.353 { 00:18:02.353 "name": "BaseBdev3", 00:18:02.353 "uuid": "eab81ee4-3ce0-4c23-bce0-ea4125b29eff", 00:18:02.353 "is_configured": true, 00:18:02.353 "data_offset": 2048, 00:18:02.353 "data_size": 63488 00:18:02.353 }, 00:18:02.353 { 00:18:02.353 "name": "BaseBdev4", 00:18:02.353 "uuid": "f0334756-bda7-4382-9146-51a3911a3a87", 00:18:02.353 "is_configured": true, 00:18:02.353 "data_offset": 2048, 00:18:02.353 "data_size": 63488 00:18:02.353 } 00:18:02.353 ] 00:18:02.353 }' 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.353 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.922 
22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.922 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.922 [2024-12-09 22:59:18.503706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.923 "name": "Existed_Raid", 00:18:02.923 "aliases": [ 00:18:02.923 "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5" 00:18:02.923 ], 00:18:02.923 "product_name": "Raid Volume", 00:18:02.923 "block_size": 512, 00:18:02.923 "num_blocks": 253952, 00:18:02.923 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:02.923 "assigned_rate_limits": { 00:18:02.923 "rw_ios_per_sec": 0, 00:18:02.923 "rw_mbytes_per_sec": 0, 00:18:02.923 "r_mbytes_per_sec": 0, 00:18:02.923 "w_mbytes_per_sec": 0 00:18:02.923 }, 00:18:02.923 "claimed": false, 00:18:02.923 "zoned": false, 00:18:02.923 "supported_io_types": { 00:18:02.923 "read": true, 00:18:02.923 "write": true, 00:18:02.923 "unmap": true, 00:18:02.923 "flush": true, 00:18:02.923 "reset": true, 00:18:02.923 "nvme_admin": false, 00:18:02.923 "nvme_io": false, 00:18:02.923 "nvme_io_md": false, 00:18:02.923 "write_zeroes": true, 00:18:02.923 "zcopy": false, 00:18:02.923 "get_zone_info": false, 00:18:02.923 "zone_management": false, 00:18:02.923 "zone_append": false, 00:18:02.923 "compare": false, 00:18:02.923 "compare_and_write": false, 00:18:02.923 "abort": false, 00:18:02.923 "seek_hole": false, 00:18:02.923 "seek_data": false, 00:18:02.923 "copy": false, 00:18:02.923 
"nvme_iov_md": false 00:18:02.923 }, 00:18:02.923 "memory_domains": [ 00:18:02.923 { 00:18:02.923 "dma_device_id": "system", 00:18:02.923 "dma_device_type": 1 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.923 "dma_device_type": 2 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "system", 00:18:02.923 "dma_device_type": 1 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.923 "dma_device_type": 2 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "system", 00:18:02.923 "dma_device_type": 1 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.923 "dma_device_type": 2 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "system", 00:18:02.923 "dma_device_type": 1 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.923 "dma_device_type": 2 00:18:02.923 } 00:18:02.923 ], 00:18:02.923 "driver_specific": { 00:18:02.923 "raid": { 00:18:02.923 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:02.923 "strip_size_kb": 64, 00:18:02.923 "state": "online", 00:18:02.923 "raid_level": "concat", 00:18:02.923 "superblock": true, 00:18:02.923 "num_base_bdevs": 4, 00:18:02.923 "num_base_bdevs_discovered": 4, 00:18:02.923 "num_base_bdevs_operational": 4, 00:18:02.923 "base_bdevs_list": [ 00:18:02.923 { 00:18:02.923 "name": "BaseBdev1", 00:18:02.923 "uuid": "2045c26a-f1e0-4639-8460-53f8009435a3", 00:18:02.923 "is_configured": true, 00:18:02.923 "data_offset": 2048, 00:18:02.923 "data_size": 63488 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "name": "BaseBdev2", 00:18:02.923 "uuid": "435b2c9b-526d-492b-bcd7-5f320a4a746c", 00:18:02.923 "is_configured": true, 00:18:02.923 "data_offset": 2048, 00:18:02.923 "data_size": 63488 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "name": "BaseBdev3", 00:18:02.923 "uuid": "eab81ee4-3ce0-4c23-bce0-ea4125b29eff", 00:18:02.923 "is_configured": true, 
00:18:02.923 "data_offset": 2048, 00:18:02.923 "data_size": 63488 00:18:02.923 }, 00:18:02.923 { 00:18:02.923 "name": "BaseBdev4", 00:18:02.923 "uuid": "f0334756-bda7-4382-9146-51a3911a3a87", 00:18:02.923 "is_configured": true, 00:18:02.923 "data_offset": 2048, 00:18:02.923 "data_size": 63488 00:18:02.923 } 00:18:02.923 ] 00:18:02.923 } 00:18:02.923 } 00:18:02.923 }' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:02.923 BaseBdev2 00:18:02.923 BaseBdev3 00:18:02.923 BaseBdev4' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.923 22:59:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.923 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.183 [2024-12-09 22:59:18.790950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.183 [2024-12-09 22:59:18.791011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.183 [2024-12-09 22:59:18.791073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.183 "name": "Existed_Raid", 00:18:03.183 "uuid": "6bdc49dc-7a8a-4da1-aeed-7d8e9c815da5", 00:18:03.183 "strip_size_kb": 64, 00:18:03.183 "state": "offline", 00:18:03.183 "raid_level": "concat", 00:18:03.183 "superblock": true, 00:18:03.183 "num_base_bdevs": 4, 00:18:03.183 "num_base_bdevs_discovered": 3, 00:18:03.183 "num_base_bdevs_operational": 3, 00:18:03.183 "base_bdevs_list": [ 00:18:03.183 { 00:18:03.183 "name": null, 00:18:03.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.183 "is_configured": false, 00:18:03.183 "data_offset": 0, 00:18:03.183 "data_size": 63488 00:18:03.183 }, 00:18:03.183 { 00:18:03.183 "name": "BaseBdev2", 00:18:03.183 "uuid": "435b2c9b-526d-492b-bcd7-5f320a4a746c", 00:18:03.183 "is_configured": true, 00:18:03.183 "data_offset": 2048, 00:18:03.183 "data_size": 63488 00:18:03.183 }, 00:18:03.183 { 00:18:03.183 "name": "BaseBdev3", 00:18:03.183 "uuid": "eab81ee4-3ce0-4c23-bce0-ea4125b29eff", 00:18:03.183 "is_configured": true, 00:18:03.183 "data_offset": 2048, 00:18:03.183 "data_size": 63488 00:18:03.183 }, 00:18:03.183 { 00:18:03.183 "name": "BaseBdev4", 00:18:03.183 "uuid": "f0334756-bda7-4382-9146-51a3911a3a87", 00:18:03.183 "is_configured": true, 00:18:03.183 "data_offset": 2048, 00:18:03.183 "data_size": 63488 00:18:03.183 } 00:18:03.183 ] 00:18:03.183 }' 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.183 22:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.764 
22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.764 [2024-12-09 22:59:19.416404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.764 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.764 [2024-12-09 22:59:19.588041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:04.024 22:59:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.024 [2024-12-09 22:59:19.754118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:04.024 [2024-12-09 22:59:19.754240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.024 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.284 BaseBdev2 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.284 22:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.284 [ 00:18:04.284 { 00:18:04.284 "name": "BaseBdev2", 00:18:04.284 "aliases": [ 00:18:04.284 
"3ba8850b-9070-472b-949d-b8b826f1e351" 00:18:04.284 ], 00:18:04.284 "product_name": "Malloc disk", 00:18:04.284 "block_size": 512, 00:18:04.284 "num_blocks": 65536, 00:18:04.284 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:04.284 "assigned_rate_limits": { 00:18:04.284 "rw_ios_per_sec": 0, 00:18:04.284 "rw_mbytes_per_sec": 0, 00:18:04.284 "r_mbytes_per_sec": 0, 00:18:04.284 "w_mbytes_per_sec": 0 00:18:04.284 }, 00:18:04.284 "claimed": false, 00:18:04.284 "zoned": false, 00:18:04.284 "supported_io_types": { 00:18:04.284 "read": true, 00:18:04.284 "write": true, 00:18:04.284 "unmap": true, 00:18:04.284 "flush": true, 00:18:04.284 "reset": true, 00:18:04.284 "nvme_admin": false, 00:18:04.284 "nvme_io": false, 00:18:04.284 "nvme_io_md": false, 00:18:04.284 "write_zeroes": true, 00:18:04.284 "zcopy": true, 00:18:04.284 "get_zone_info": false, 00:18:04.284 "zone_management": false, 00:18:04.284 "zone_append": false, 00:18:04.284 "compare": false, 00:18:04.284 "compare_and_write": false, 00:18:04.284 "abort": true, 00:18:04.284 "seek_hole": false, 00:18:04.284 "seek_data": false, 00:18:04.284 "copy": true, 00:18:04.284 "nvme_iov_md": false 00:18:04.284 }, 00:18:04.284 "memory_domains": [ 00:18:04.284 { 00:18:04.284 "dma_device_id": "system", 00:18:04.284 "dma_device_type": 1 00:18:04.284 }, 00:18:04.284 { 00:18:04.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.284 "dma_device_type": 2 00:18:04.284 } 00:18:04.284 ], 00:18:04.284 "driver_specific": {} 00:18:04.284 } 00:18:04.284 ] 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:04.284 22:59:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.284 BaseBdev3 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.284 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.285 [ 00:18:04.285 { 
00:18:04.285 "name": "BaseBdev3", 00:18:04.285 "aliases": [ 00:18:04.285 "1f192455-dbd4-4126-8f40-f7935d3369ff" 00:18:04.285 ], 00:18:04.285 "product_name": "Malloc disk", 00:18:04.285 "block_size": 512, 00:18:04.285 "num_blocks": 65536, 00:18:04.285 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:04.285 "assigned_rate_limits": { 00:18:04.285 "rw_ios_per_sec": 0, 00:18:04.285 "rw_mbytes_per_sec": 0, 00:18:04.285 "r_mbytes_per_sec": 0, 00:18:04.285 "w_mbytes_per_sec": 0 00:18:04.285 }, 00:18:04.285 "claimed": false, 00:18:04.285 "zoned": false, 00:18:04.285 "supported_io_types": { 00:18:04.285 "read": true, 00:18:04.285 "write": true, 00:18:04.285 "unmap": true, 00:18:04.285 "flush": true, 00:18:04.285 "reset": true, 00:18:04.285 "nvme_admin": false, 00:18:04.285 "nvme_io": false, 00:18:04.285 "nvme_io_md": false, 00:18:04.285 "write_zeroes": true, 00:18:04.285 "zcopy": true, 00:18:04.285 "get_zone_info": false, 00:18:04.285 "zone_management": false, 00:18:04.285 "zone_append": false, 00:18:04.285 "compare": false, 00:18:04.285 "compare_and_write": false, 00:18:04.285 "abort": true, 00:18:04.285 "seek_hole": false, 00:18:04.285 "seek_data": false, 00:18:04.285 "copy": true, 00:18:04.285 "nvme_iov_md": false 00:18:04.285 }, 00:18:04.285 "memory_domains": [ 00:18:04.285 { 00:18:04.285 "dma_device_id": "system", 00:18:04.285 "dma_device_type": 1 00:18:04.285 }, 00:18:04.285 { 00:18:04.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.285 "dma_device_type": 2 00:18:04.285 } 00:18:04.285 ], 00:18:04.285 "driver_specific": {} 00:18:04.285 } 00:18:04.285 ] 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.285 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.545 BaseBdev4 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:18:04.545 [ 00:18:04.545 { 00:18:04.545 "name": "BaseBdev4", 00:18:04.545 "aliases": [ 00:18:04.545 "18de6c7f-0c9b-49cc-a76e-a5923737f15b" 00:18:04.545 ], 00:18:04.545 "product_name": "Malloc disk", 00:18:04.545 "block_size": 512, 00:18:04.545 "num_blocks": 65536, 00:18:04.545 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:04.545 "assigned_rate_limits": { 00:18:04.545 "rw_ios_per_sec": 0, 00:18:04.545 "rw_mbytes_per_sec": 0, 00:18:04.545 "r_mbytes_per_sec": 0, 00:18:04.545 "w_mbytes_per_sec": 0 00:18:04.545 }, 00:18:04.545 "claimed": false, 00:18:04.545 "zoned": false, 00:18:04.545 "supported_io_types": { 00:18:04.545 "read": true, 00:18:04.545 "write": true, 00:18:04.545 "unmap": true, 00:18:04.545 "flush": true, 00:18:04.545 "reset": true, 00:18:04.545 "nvme_admin": false, 00:18:04.545 "nvme_io": false, 00:18:04.545 "nvme_io_md": false, 00:18:04.545 "write_zeroes": true, 00:18:04.545 "zcopy": true, 00:18:04.545 "get_zone_info": false, 00:18:04.545 "zone_management": false, 00:18:04.545 "zone_append": false, 00:18:04.545 "compare": false, 00:18:04.545 "compare_and_write": false, 00:18:04.545 "abort": true, 00:18:04.545 "seek_hole": false, 00:18:04.545 "seek_data": false, 00:18:04.545 "copy": true, 00:18:04.545 "nvme_iov_md": false 00:18:04.545 }, 00:18:04.545 "memory_domains": [ 00:18:04.545 { 00:18:04.545 "dma_device_id": "system", 00:18:04.545 "dma_device_type": 1 00:18:04.545 }, 00:18:04.545 { 00:18:04.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.545 "dma_device_type": 2 00:18:04.545 } 00:18:04.545 ], 00:18:04.545 "driver_specific": {} 00:18:04.545 } 00:18:04.545 ] 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:04.545 22:59:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:04.545 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.546 [2024-12-09 22:59:20.190097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.546 [2024-12-09 22:59:20.190217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.546 [2024-12-09 22:59:20.190279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.546 [2024-12-09 22:59:20.192411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.546 [2024-12-09 22:59:20.192554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.546 "name": "Existed_Raid", 00:18:04.546 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:04.546 "strip_size_kb": 64, 00:18:04.546 "state": "configuring", 00:18:04.546 "raid_level": "concat", 00:18:04.546 "superblock": true, 00:18:04.546 "num_base_bdevs": 4, 00:18:04.546 "num_base_bdevs_discovered": 3, 00:18:04.546 "num_base_bdevs_operational": 4, 00:18:04.546 "base_bdevs_list": [ 00:18:04.546 { 00:18:04.546 "name": "BaseBdev1", 00:18:04.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.546 "is_configured": false, 00:18:04.546 "data_offset": 0, 00:18:04.546 "data_size": 0 00:18:04.546 }, 00:18:04.546 { 00:18:04.546 "name": "BaseBdev2", 00:18:04.546 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:04.546 "is_configured": true, 00:18:04.546 "data_offset": 2048, 00:18:04.546 "data_size": 63488 
00:18:04.546 }, 00:18:04.546 { 00:18:04.546 "name": "BaseBdev3", 00:18:04.546 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:04.546 "is_configured": true, 00:18:04.546 "data_offset": 2048, 00:18:04.546 "data_size": 63488 00:18:04.546 }, 00:18:04.546 { 00:18:04.546 "name": "BaseBdev4", 00:18:04.546 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:04.546 "is_configured": true, 00:18:04.546 "data_offset": 2048, 00:18:04.546 "data_size": 63488 00:18:04.546 } 00:18:04.546 ] 00:18:04.546 }' 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.546 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.115 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:05.115 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.115 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.115 [2024-12-09 22:59:20.701323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.116 "name": "Existed_Raid", 00:18:05.116 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:05.116 "strip_size_kb": 64, 00:18:05.116 "state": "configuring", 00:18:05.116 "raid_level": "concat", 00:18:05.116 "superblock": true, 00:18:05.116 "num_base_bdevs": 4, 00:18:05.116 "num_base_bdevs_discovered": 2, 00:18:05.116 "num_base_bdevs_operational": 4, 00:18:05.116 "base_bdevs_list": [ 00:18:05.116 { 00:18:05.116 "name": "BaseBdev1", 00:18:05.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.116 "is_configured": false, 00:18:05.116 "data_offset": 0, 00:18:05.116 "data_size": 0 00:18:05.116 }, 00:18:05.116 { 00:18:05.116 "name": null, 00:18:05.116 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:05.116 "is_configured": false, 00:18:05.116 "data_offset": 0, 00:18:05.116 "data_size": 63488 
00:18:05.116 }, 00:18:05.116 { 00:18:05.116 "name": "BaseBdev3", 00:18:05.116 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:05.116 "is_configured": true, 00:18:05.116 "data_offset": 2048, 00:18:05.116 "data_size": 63488 00:18:05.116 }, 00:18:05.116 { 00:18:05.116 "name": "BaseBdev4", 00:18:05.116 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:05.116 "is_configured": true, 00:18:05.116 "data_offset": 2048, 00:18:05.116 "data_size": 63488 00:18:05.116 } 00:18:05.116 ] 00:18:05.116 }' 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.116 22:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.375 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:05.375 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.375 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.375 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.635 [2024-12-09 22:59:21.288276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.635 BaseBdev1 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.635 [ 00:18:05.635 { 00:18:05.635 "name": "BaseBdev1", 00:18:05.635 "aliases": [ 00:18:05.635 "ed90f16e-9378-4020-9bf3-bf9c00d3b64e" 00:18:05.635 ], 00:18:05.635 "product_name": "Malloc disk", 00:18:05.635 "block_size": 512, 00:18:05.635 "num_blocks": 65536, 00:18:05.635 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:05.635 "assigned_rate_limits": { 00:18:05.635 "rw_ios_per_sec": 0, 00:18:05.635 "rw_mbytes_per_sec": 0, 
00:18:05.635 "r_mbytes_per_sec": 0, 00:18:05.635 "w_mbytes_per_sec": 0 00:18:05.635 }, 00:18:05.635 "claimed": true, 00:18:05.635 "claim_type": "exclusive_write", 00:18:05.635 "zoned": false, 00:18:05.635 "supported_io_types": { 00:18:05.635 "read": true, 00:18:05.635 "write": true, 00:18:05.635 "unmap": true, 00:18:05.635 "flush": true, 00:18:05.635 "reset": true, 00:18:05.635 "nvme_admin": false, 00:18:05.635 "nvme_io": false, 00:18:05.635 "nvme_io_md": false, 00:18:05.635 "write_zeroes": true, 00:18:05.635 "zcopy": true, 00:18:05.635 "get_zone_info": false, 00:18:05.635 "zone_management": false, 00:18:05.635 "zone_append": false, 00:18:05.635 "compare": false, 00:18:05.635 "compare_and_write": false, 00:18:05.635 "abort": true, 00:18:05.635 "seek_hole": false, 00:18:05.635 "seek_data": false, 00:18:05.635 "copy": true, 00:18:05.635 "nvme_iov_md": false 00:18:05.635 }, 00:18:05.635 "memory_domains": [ 00:18:05.635 { 00:18:05.635 "dma_device_id": "system", 00:18:05.635 "dma_device_type": 1 00:18:05.635 }, 00:18:05.635 { 00:18:05.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.635 "dma_device_type": 2 00:18:05.635 } 00:18:05.635 ], 00:18:05.635 "driver_specific": {} 00:18:05.635 } 00:18:05.635 ] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:05.635 22:59:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.635 "name": "Existed_Raid", 00:18:05.635 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:05.635 "strip_size_kb": 64, 00:18:05.635 "state": "configuring", 00:18:05.635 "raid_level": "concat", 00:18:05.635 "superblock": true, 00:18:05.635 "num_base_bdevs": 4, 00:18:05.635 "num_base_bdevs_discovered": 3, 00:18:05.635 "num_base_bdevs_operational": 4, 00:18:05.635 "base_bdevs_list": [ 00:18:05.635 { 00:18:05.635 "name": "BaseBdev1", 00:18:05.635 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:05.635 "is_configured": true, 00:18:05.635 "data_offset": 2048, 00:18:05.635 "data_size": 63488 00:18:05.635 }, 00:18:05.635 { 
00:18:05.635 "name": null, 00:18:05.635 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:05.635 "is_configured": false, 00:18:05.635 "data_offset": 0, 00:18:05.635 "data_size": 63488 00:18:05.635 }, 00:18:05.635 { 00:18:05.635 "name": "BaseBdev3", 00:18:05.635 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:05.635 "is_configured": true, 00:18:05.635 "data_offset": 2048, 00:18:05.635 "data_size": 63488 00:18:05.635 }, 00:18:05.635 { 00:18:05.635 "name": "BaseBdev4", 00:18:05.635 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:05.635 "is_configured": true, 00:18:05.635 "data_offset": 2048, 00:18:05.635 "data_size": 63488 00:18:05.635 } 00:18:05.635 ] 00:18:05.635 }' 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.635 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.206 [2024-12-09 22:59:21.855444] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.206 22:59:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.206 "name": "Existed_Raid", 00:18:06.206 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:06.206 "strip_size_kb": 64, 00:18:06.206 "state": "configuring", 00:18:06.206 "raid_level": "concat", 00:18:06.206 "superblock": true, 00:18:06.206 "num_base_bdevs": 4, 00:18:06.206 "num_base_bdevs_discovered": 2, 00:18:06.206 "num_base_bdevs_operational": 4, 00:18:06.206 "base_bdevs_list": [ 00:18:06.206 { 00:18:06.206 "name": "BaseBdev1", 00:18:06.206 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:06.206 "is_configured": true, 00:18:06.206 "data_offset": 2048, 00:18:06.206 "data_size": 63488 00:18:06.206 }, 00:18:06.206 { 00:18:06.206 "name": null, 00:18:06.206 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:06.206 "is_configured": false, 00:18:06.206 "data_offset": 0, 00:18:06.206 "data_size": 63488 00:18:06.206 }, 00:18:06.206 { 00:18:06.206 "name": null, 00:18:06.206 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:06.206 "is_configured": false, 00:18:06.206 "data_offset": 0, 00:18:06.206 "data_size": 63488 00:18:06.206 }, 00:18:06.206 { 00:18:06.206 "name": "BaseBdev4", 00:18:06.206 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:06.206 "is_configured": true, 00:18:06.206 "data_offset": 2048, 00:18:06.206 "data_size": 63488 00:18:06.206 } 00:18:06.206 ] 00:18:06.206 }' 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.206 22:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.776 
22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.776 [2024-12-09 22:59:22.362598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.776 "name": "Existed_Raid", 00:18:06.776 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:06.776 "strip_size_kb": 64, 00:18:06.776 "state": "configuring", 00:18:06.776 "raid_level": "concat", 00:18:06.776 "superblock": true, 00:18:06.776 "num_base_bdevs": 4, 00:18:06.776 "num_base_bdevs_discovered": 3, 00:18:06.776 "num_base_bdevs_operational": 4, 00:18:06.776 "base_bdevs_list": [ 00:18:06.776 { 00:18:06.776 "name": "BaseBdev1", 00:18:06.776 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:06.776 "is_configured": true, 00:18:06.776 "data_offset": 2048, 00:18:06.776 "data_size": 63488 00:18:06.776 }, 00:18:06.776 { 00:18:06.776 "name": null, 00:18:06.776 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:06.776 "is_configured": false, 00:18:06.776 "data_offset": 0, 00:18:06.776 "data_size": 63488 00:18:06.776 }, 00:18:06.776 { 00:18:06.776 "name": "BaseBdev3", 00:18:06.776 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:06.776 "is_configured": true, 00:18:06.776 "data_offset": 2048, 00:18:06.776 "data_size": 63488 00:18:06.776 }, 00:18:06.776 { 00:18:06.776 "name": "BaseBdev4", 00:18:06.776 "uuid": 
"18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:06.776 "is_configured": true, 00:18:06.776 "data_offset": 2048, 00:18:06.776 "data_size": 63488 00:18:06.776 } 00:18:06.776 ] 00:18:06.776 }' 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.776 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.035 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.035 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.036 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.036 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:07.036 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.296 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:07.296 22:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:07.296 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.296 22:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.296 [2024-12-09 22:59:22.917730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.296 "name": "Existed_Raid", 00:18:07.296 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:07.296 "strip_size_kb": 64, 00:18:07.296 "state": "configuring", 00:18:07.296 "raid_level": "concat", 00:18:07.296 "superblock": true, 00:18:07.296 "num_base_bdevs": 4, 00:18:07.296 "num_base_bdevs_discovered": 2, 00:18:07.296 "num_base_bdevs_operational": 4, 00:18:07.296 "base_bdevs_list": [ 00:18:07.296 { 00:18:07.296 "name": null, 00:18:07.296 
"uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:07.296 "is_configured": false, 00:18:07.296 "data_offset": 0, 00:18:07.296 "data_size": 63488 00:18:07.296 }, 00:18:07.296 { 00:18:07.296 "name": null, 00:18:07.296 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:07.296 "is_configured": false, 00:18:07.296 "data_offset": 0, 00:18:07.296 "data_size": 63488 00:18:07.296 }, 00:18:07.296 { 00:18:07.296 "name": "BaseBdev3", 00:18:07.296 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:07.296 "is_configured": true, 00:18:07.296 "data_offset": 2048, 00:18:07.296 "data_size": 63488 00:18:07.296 }, 00:18:07.296 { 00:18:07.296 "name": "BaseBdev4", 00:18:07.296 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:07.296 "is_configured": true, 00:18:07.296 "data_offset": 2048, 00:18:07.296 "data_size": 63488 00:18:07.296 } 00:18:07.296 ] 00:18:07.296 }' 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.296 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.864 [2024-12-09 22:59:23.489562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.864 22:59:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.864 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.864 "name": "Existed_Raid", 00:18:07.864 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:07.864 "strip_size_kb": 64, 00:18:07.864 "state": "configuring", 00:18:07.864 "raid_level": "concat", 00:18:07.864 "superblock": true, 00:18:07.864 "num_base_bdevs": 4, 00:18:07.864 "num_base_bdevs_discovered": 3, 00:18:07.864 "num_base_bdevs_operational": 4, 00:18:07.864 "base_bdevs_list": [ 00:18:07.864 { 00:18:07.864 "name": null, 00:18:07.864 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:07.864 "is_configured": false, 00:18:07.864 "data_offset": 0, 00:18:07.864 "data_size": 63488 00:18:07.864 }, 00:18:07.864 { 00:18:07.864 "name": "BaseBdev2", 00:18:07.864 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:07.864 "is_configured": true, 00:18:07.864 "data_offset": 2048, 00:18:07.864 "data_size": 63488 00:18:07.864 }, 00:18:07.864 { 00:18:07.865 "name": "BaseBdev3", 00:18:07.865 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:07.865 "is_configured": true, 00:18:07.865 "data_offset": 2048, 00:18:07.865 "data_size": 63488 00:18:07.865 }, 00:18:07.865 { 00:18:07.865 "name": "BaseBdev4", 00:18:07.865 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:07.865 "is_configured": true, 00:18:07.865 "data_offset": 2048, 00:18:07.865 "data_size": 63488 00:18:07.865 } 00:18:07.865 ] 00:18:07.865 }' 00:18:07.865 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.865 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.124 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.124 22:59:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:08.124 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.124 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.124 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.384 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:08.384 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.384 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.384 22:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.384 22:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:08.384 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.384 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed90f16e-9378-4020-9bf3-bf9c00d3b64e 00:18:08.384 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.384 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.384 [2024-12-09 22:59:24.087388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:08.384 [2024-12-09 22:59:24.087830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:08.384 [2024-12-09 22:59:24.087887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:08.385 [2024-12-09 22:59:24.088214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:18:08.385 [2024-12-09 22:59:24.088421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:08.385 [2024-12-09 22:59:24.088496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:08.385 NewBaseBdev 00:18:08.385 [2024-12-09 22:59:24.088686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.385 22:59:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.385 [ 00:18:08.385 { 00:18:08.385 "name": "NewBaseBdev", 00:18:08.385 "aliases": [ 00:18:08.385 "ed90f16e-9378-4020-9bf3-bf9c00d3b64e" 00:18:08.385 ], 00:18:08.385 "product_name": "Malloc disk", 00:18:08.385 "block_size": 512, 00:18:08.385 "num_blocks": 65536, 00:18:08.385 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:08.385 "assigned_rate_limits": { 00:18:08.385 "rw_ios_per_sec": 0, 00:18:08.385 "rw_mbytes_per_sec": 0, 00:18:08.385 "r_mbytes_per_sec": 0, 00:18:08.385 "w_mbytes_per_sec": 0 00:18:08.385 }, 00:18:08.385 "claimed": true, 00:18:08.385 "claim_type": "exclusive_write", 00:18:08.385 "zoned": false, 00:18:08.385 "supported_io_types": { 00:18:08.385 "read": true, 00:18:08.385 "write": true, 00:18:08.385 "unmap": true, 00:18:08.385 "flush": true, 00:18:08.385 "reset": true, 00:18:08.385 "nvme_admin": false, 00:18:08.385 "nvme_io": false, 00:18:08.385 "nvme_io_md": false, 00:18:08.385 "write_zeroes": true, 00:18:08.385 "zcopy": true, 00:18:08.385 "get_zone_info": false, 00:18:08.385 "zone_management": false, 00:18:08.385 "zone_append": false, 00:18:08.385 "compare": false, 00:18:08.385 "compare_and_write": false, 00:18:08.385 "abort": true, 00:18:08.385 "seek_hole": false, 00:18:08.385 "seek_data": false, 00:18:08.385 "copy": true, 00:18:08.385 "nvme_iov_md": false 00:18:08.385 }, 00:18:08.385 "memory_domains": [ 00:18:08.385 { 00:18:08.385 "dma_device_id": "system", 00:18:08.385 "dma_device_type": 1 00:18:08.385 }, 00:18:08.385 { 00:18:08.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.385 "dma_device_type": 2 00:18:08.385 } 00:18:08.385 ], 00:18:08.385 "driver_specific": {} 00:18:08.385 } 00:18:08.385 ] 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:08.385 22:59:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.385 "name": "Existed_Raid", 00:18:08.385 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:08.385 "strip_size_kb": 64, 00:18:08.385 
"state": "online", 00:18:08.385 "raid_level": "concat", 00:18:08.385 "superblock": true, 00:18:08.385 "num_base_bdevs": 4, 00:18:08.385 "num_base_bdevs_discovered": 4, 00:18:08.385 "num_base_bdevs_operational": 4, 00:18:08.385 "base_bdevs_list": [ 00:18:08.385 { 00:18:08.385 "name": "NewBaseBdev", 00:18:08.385 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:08.385 "is_configured": true, 00:18:08.385 "data_offset": 2048, 00:18:08.385 "data_size": 63488 00:18:08.385 }, 00:18:08.385 { 00:18:08.385 "name": "BaseBdev2", 00:18:08.385 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:08.385 "is_configured": true, 00:18:08.385 "data_offset": 2048, 00:18:08.385 "data_size": 63488 00:18:08.385 }, 00:18:08.385 { 00:18:08.385 "name": "BaseBdev3", 00:18:08.385 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:08.385 "is_configured": true, 00:18:08.385 "data_offset": 2048, 00:18:08.385 "data_size": 63488 00:18:08.385 }, 00:18:08.385 { 00:18:08.385 "name": "BaseBdev4", 00:18:08.385 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:08.385 "is_configured": true, 00:18:08.385 "data_offset": 2048, 00:18:08.385 "data_size": 63488 00:18:08.385 } 00:18:08.385 ] 00:18:08.385 }' 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.385 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:08.955 
22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:08.955 [2024-12-09 22:59:24.627002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.955 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:08.955 "name": "Existed_Raid", 00:18:08.955 "aliases": [ 00:18:08.955 "beb7785f-6468-4b68-9b23-cddcf89090f6" 00:18:08.955 ], 00:18:08.955 "product_name": "Raid Volume", 00:18:08.955 "block_size": 512, 00:18:08.955 "num_blocks": 253952, 00:18:08.955 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:08.955 "assigned_rate_limits": { 00:18:08.955 "rw_ios_per_sec": 0, 00:18:08.955 "rw_mbytes_per_sec": 0, 00:18:08.955 "r_mbytes_per_sec": 0, 00:18:08.955 "w_mbytes_per_sec": 0 00:18:08.955 }, 00:18:08.955 "claimed": false, 00:18:08.955 "zoned": false, 00:18:08.955 "supported_io_types": { 00:18:08.955 "read": true, 00:18:08.955 "write": true, 00:18:08.955 "unmap": true, 00:18:08.955 "flush": true, 00:18:08.955 "reset": true, 00:18:08.955 "nvme_admin": false, 00:18:08.955 "nvme_io": false, 00:18:08.955 "nvme_io_md": false, 00:18:08.955 "write_zeroes": true, 00:18:08.955 "zcopy": false, 00:18:08.955 "get_zone_info": false, 00:18:08.955 "zone_management": false, 00:18:08.955 "zone_append": false, 00:18:08.955 "compare": false, 00:18:08.955 "compare_and_write": false, 00:18:08.955 "abort": 
false, 00:18:08.955 "seek_hole": false, 00:18:08.955 "seek_data": false, 00:18:08.955 "copy": false, 00:18:08.955 "nvme_iov_md": false 00:18:08.955 }, 00:18:08.955 "memory_domains": [ 00:18:08.955 { 00:18:08.955 "dma_device_id": "system", 00:18:08.955 "dma_device_type": 1 00:18:08.955 }, 00:18:08.955 { 00:18:08.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.955 "dma_device_type": 2 00:18:08.955 }, 00:18:08.955 { 00:18:08.955 "dma_device_id": "system", 00:18:08.955 "dma_device_type": 1 00:18:08.955 }, 00:18:08.955 { 00:18:08.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.956 "dma_device_type": 2 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 "dma_device_id": "system", 00:18:08.956 "dma_device_type": 1 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.956 "dma_device_type": 2 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 "dma_device_id": "system", 00:18:08.956 "dma_device_type": 1 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.956 "dma_device_type": 2 00:18:08.956 } 00:18:08.956 ], 00:18:08.956 "driver_specific": { 00:18:08.956 "raid": { 00:18:08.956 "uuid": "beb7785f-6468-4b68-9b23-cddcf89090f6", 00:18:08.956 "strip_size_kb": 64, 00:18:08.956 "state": "online", 00:18:08.956 "raid_level": "concat", 00:18:08.956 "superblock": true, 00:18:08.956 "num_base_bdevs": 4, 00:18:08.956 "num_base_bdevs_discovered": 4, 00:18:08.956 "num_base_bdevs_operational": 4, 00:18:08.956 "base_bdevs_list": [ 00:18:08.956 { 00:18:08.956 "name": "NewBaseBdev", 00:18:08.956 "uuid": "ed90f16e-9378-4020-9bf3-bf9c00d3b64e", 00:18:08.956 "is_configured": true, 00:18:08.956 "data_offset": 2048, 00:18:08.956 "data_size": 63488 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 "name": "BaseBdev2", 00:18:08.956 "uuid": "3ba8850b-9070-472b-949d-b8b826f1e351", 00:18:08.956 "is_configured": true, 00:18:08.956 "data_offset": 2048, 00:18:08.956 "data_size": 63488 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 
"name": "BaseBdev3", 00:18:08.956 "uuid": "1f192455-dbd4-4126-8f40-f7935d3369ff", 00:18:08.956 "is_configured": true, 00:18:08.956 "data_offset": 2048, 00:18:08.956 "data_size": 63488 00:18:08.956 }, 00:18:08.956 { 00:18:08.956 "name": "BaseBdev4", 00:18:08.956 "uuid": "18de6c7f-0c9b-49cc-a76e-a5923737f15b", 00:18:08.956 "is_configured": true, 00:18:08.956 "data_offset": 2048, 00:18:08.956 "data_size": 63488 00:18:08.956 } 00:18:08.956 ] 00:18:08.956 } 00:18:08.956 } 00:18:08.956 }' 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:08.956 BaseBdev2 00:18:08.956 BaseBdev3 00:18:08.956 BaseBdev4' 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.956 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.215 22:59:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.215 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.216 [2024-12-09 22:59:24.981965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.216 [2024-12-09 22:59:24.982060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.216 [2024-12-09 22:59:24.982188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.216 [2024-12-09 22:59:24.982273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.216 [2024-12-09 22:59:24.982286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72537 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72537 ']' 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72537 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.216 22:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72537 00:18:09.216 killing process with pid 72537 00:18:09.216 22:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.216 22:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.216 22:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72537' 00:18:09.216 22:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72537 00:18:09.216 [2024-12-09 22:59:25.029047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.216 22:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72537 00:18:09.783 [2024-12-09 22:59:25.494720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.165 22:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:11.165 ************************************ 00:18:11.165 00:18:11.165 real 0m12.567s 00:18:11.165 user 0m19.799s 00:18:11.165 sys 0m2.221s 00:18:11.165 22:59:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.165 22:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.165 END TEST raid_state_function_test_sb 00:18:11.165 ************************************ 00:18:11.165 22:59:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:11.165 22:59:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:11.165 22:59:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.165 22:59:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.165 ************************************ 00:18:11.165 START TEST raid_superblock_test 00:18:11.165 ************************************ 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:11.165 22:59:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73218 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73218 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73218 ']' 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.165 22:59:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.165 [2024-12-09 22:59:26.981705] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:18:11.165 [2024-12-09 22:59:26.981928] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:18:11.425 [2024-12-09 22:59:27.159017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.684 [2024-12-09 22:59:27.286909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.684 [2024-12-09 22:59:27.505738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.684 [2024-12-09 22:59:27.505889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:12.253 
22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.253 malloc1 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.253 [2024-12-09 22:59:27.918807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:12.253 [2024-12-09 22:59:27.918934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.253 [2024-12-09 22:59:27.918983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:12.253 [2024-12-09 22:59:27.919022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.253 [2024-12-09 22:59:27.921548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.253 [2024-12-09 22:59:27.921631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:12.253 pt1 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.253 malloc2 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.253 [2024-12-09 22:59:27.983807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.253 [2024-12-09 22:59:27.983875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.253 [2024-12-09 22:59:27.983903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.253 [2024-12-09 22:59:27.983914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.253 [2024-12-09 22:59:27.986394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.253 [2024-12-09 22:59:27.986436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.253 
pt2 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.253 malloc3 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.253 [2024-12-09 22:59:28.062968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.253 [2024-12-09 22:59:28.063072] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.253 [2024-12-09 22:59:28.063118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:12.253 [2024-12-09 22:59:28.063151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.253 [2024-12-09 22:59:28.065622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.253 [2024-12-09 22:59:28.065716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.253 pt3 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.253 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.513 malloc4 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.513 [2024-12-09 22:59:28.127436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:12.513 [2024-12-09 22:59:28.127592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.513 [2024-12-09 22:59:28.127646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:12.513 [2024-12-09 22:59:28.127690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.513 [2024-12-09 22:59:28.130217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.513 [2024-12-09 22:59:28.130298] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:12.513 pt4 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.513 [2024-12-09 22:59:28.139504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:12.513 [2024-12-09 
22:59:28.141620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.513 [2024-12-09 22:59:28.141727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.513 [2024-12-09 22:59:28.141782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:12.513 [2024-12-09 22:59:28.142020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.513 [2024-12-09 22:59:28.142033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:12.513 [2024-12-09 22:59:28.142353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.513 [2024-12-09 22:59:28.142579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.513 [2024-12-09 22:59:28.142596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.513 [2024-12-09 22:59:28.142801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.513 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.514 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.514 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.514 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.514 "name": "raid_bdev1", 00:18:12.514 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:12.514 "strip_size_kb": 64, 00:18:12.514 "state": "online", 00:18:12.514 "raid_level": "concat", 00:18:12.514 "superblock": true, 00:18:12.514 "num_base_bdevs": 4, 00:18:12.514 "num_base_bdevs_discovered": 4, 00:18:12.514 "num_base_bdevs_operational": 4, 00:18:12.514 "base_bdevs_list": [ 00:18:12.514 { 00:18:12.514 "name": "pt1", 00:18:12.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.514 "is_configured": true, 00:18:12.514 "data_offset": 2048, 00:18:12.514 "data_size": 63488 00:18:12.514 }, 00:18:12.514 { 00:18:12.514 "name": "pt2", 00:18:12.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.514 "is_configured": true, 00:18:12.514 "data_offset": 2048, 00:18:12.514 "data_size": 63488 00:18:12.514 }, 00:18:12.514 { 00:18:12.514 "name": "pt3", 00:18:12.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.514 "is_configured": true, 00:18:12.514 "data_offset": 2048, 00:18:12.514 
"data_size": 63488 00:18:12.514 }, 00:18:12.514 { 00:18:12.514 "name": "pt4", 00:18:12.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.514 "is_configured": true, 00:18:12.514 "data_offset": 2048, 00:18:12.514 "data_size": 63488 00:18:12.514 } 00:18:12.514 ] 00:18:12.514 }' 00:18:12.514 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.514 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.773 [2024-12-09 22:59:28.591082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.773 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.773 "name": "raid_bdev1", 00:18:12.773 "aliases": [ 00:18:12.773 "6486e6f2-6493-4c04-bdb5-46a67a03c842" 
00:18:12.773 ], 00:18:12.773 "product_name": "Raid Volume", 00:18:12.773 "block_size": 512, 00:18:12.773 "num_blocks": 253952, 00:18:12.773 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:12.773 "assigned_rate_limits": { 00:18:12.774 "rw_ios_per_sec": 0, 00:18:12.774 "rw_mbytes_per_sec": 0, 00:18:12.774 "r_mbytes_per_sec": 0, 00:18:12.774 "w_mbytes_per_sec": 0 00:18:12.774 }, 00:18:12.774 "claimed": false, 00:18:12.774 "zoned": false, 00:18:12.774 "supported_io_types": { 00:18:12.774 "read": true, 00:18:12.774 "write": true, 00:18:12.774 "unmap": true, 00:18:12.774 "flush": true, 00:18:12.774 "reset": true, 00:18:12.774 "nvme_admin": false, 00:18:12.774 "nvme_io": false, 00:18:12.774 "nvme_io_md": false, 00:18:12.774 "write_zeroes": true, 00:18:12.774 "zcopy": false, 00:18:12.774 "get_zone_info": false, 00:18:12.774 "zone_management": false, 00:18:12.774 "zone_append": false, 00:18:12.774 "compare": false, 00:18:12.774 "compare_and_write": false, 00:18:12.774 "abort": false, 00:18:12.774 "seek_hole": false, 00:18:12.774 "seek_data": false, 00:18:12.774 "copy": false, 00:18:12.774 "nvme_iov_md": false 00:18:12.774 }, 00:18:12.774 "memory_domains": [ 00:18:12.774 { 00:18:12.774 "dma_device_id": "system", 00:18:12.774 "dma_device_type": 1 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.774 "dma_device_type": 2 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": "system", 00:18:12.774 "dma_device_type": 1 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.774 "dma_device_type": 2 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": "system", 00:18:12.774 "dma_device_type": 1 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.774 "dma_device_type": 2 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": "system", 00:18:12.774 "dma_device_type": 1 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:12.774 "dma_device_type": 2 00:18:12.774 } 00:18:12.774 ], 00:18:12.774 "driver_specific": { 00:18:12.774 "raid": { 00:18:12.774 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:12.774 "strip_size_kb": 64, 00:18:12.774 "state": "online", 00:18:12.774 "raid_level": "concat", 00:18:12.774 "superblock": true, 00:18:12.774 "num_base_bdevs": 4, 00:18:12.774 "num_base_bdevs_discovered": 4, 00:18:12.774 "num_base_bdevs_operational": 4, 00:18:12.774 "base_bdevs_list": [ 00:18:12.774 { 00:18:12.774 "name": "pt1", 00:18:12.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.774 "is_configured": true, 00:18:12.774 "data_offset": 2048, 00:18:12.774 "data_size": 63488 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "name": "pt2", 00:18:12.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.774 "is_configured": true, 00:18:12.774 "data_offset": 2048, 00:18:12.774 "data_size": 63488 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "name": "pt3", 00:18:12.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.774 "is_configured": true, 00:18:12.774 "data_offset": 2048, 00:18:12.774 "data_size": 63488 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "name": "pt4", 00:18:12.774 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.774 "is_configured": true, 00:18:12.774 "data_offset": 2048, 00:18:12.774 "data_size": 63488 00:18:12.774 } 00:18:12.774 ] 00:18:12.774 } 00:18:12.774 } 00:18:12.774 }' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:13.034 pt2 00:18:13.034 pt3 00:18:13.034 pt4' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.034 22:59:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.034 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.035 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.035 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.035 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:13.035 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.035 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.035 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:13.295 [2024-12-09 22:59:28.938494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6486e6f2-6493-4c04-bdb5-46a67a03c842 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6486e6f2-6493-4c04-bdb5-46a67a03c842 ']' 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 [2024-12-09 22:59:28.986025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.295 [2024-12-09 22:59:28.986055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.295 [2024-12-09 22:59:28.986150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.295 [2024-12-09 22:59:28.986226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.295 [2024-12-09 22:59:28.986242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:13.295 22:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.295 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.295 [2024-12-09 22:59:29.149806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:13.556 [2024-12-09 22:59:29.151981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:13.556 [2024-12-09 22:59:29.152099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:13.556 [2024-12-09 22:59:29.152164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:13.556 [2024-12-09 22:59:29.152272] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:13.556 [2024-12-09 22:59:29.152380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:13.556 [2024-12-09 22:59:29.152469] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:13.556 [2024-12-09 22:59:29.152542] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:13.556 [2024-12-09 22:59:29.152609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.556 [2024-12-09 22:59:29.152627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:18:13.556 request: 00:18:13.556 { 00:18:13.556 "name": "raid_bdev1", 00:18:13.556 "raid_level": "concat", 00:18:13.556 "base_bdevs": [ 00:18:13.556 "malloc1", 00:18:13.556 "malloc2", 00:18:13.556 "malloc3", 00:18:13.556 "malloc4" 00:18:13.556 ], 00:18:13.556 "strip_size_kb": 64, 00:18:13.556 "superblock": false, 00:18:13.556 "method": "bdev_raid_create", 00:18:13.556 "req_id": 1 00:18:13.556 } 00:18:13.556 Got JSON-RPC error response 00:18:13.556 response: 00:18:13.556 { 00:18:13.556 "code": -17, 00:18:13.557 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:13.557 } 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.557 [2024-12-09 22:59:29.217644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.557 [2024-12-09 22:59:29.217771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.557 [2024-12-09 22:59:29.217796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:13.557 [2024-12-09 22:59:29.217808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.557 [2024-12-09 22:59:29.220226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.557 [2024-12-09 22:59:29.220271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.557 [2024-12-09 22:59:29.220372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.557 [2024-12-09 22:59:29.220452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.557 pt1 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.557 "name": "raid_bdev1", 00:18:13.557 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:13.557 "strip_size_kb": 64, 00:18:13.557 "state": "configuring", 00:18:13.557 "raid_level": "concat", 00:18:13.557 "superblock": true, 00:18:13.557 "num_base_bdevs": 4, 00:18:13.557 "num_base_bdevs_discovered": 1, 00:18:13.557 "num_base_bdevs_operational": 4, 00:18:13.557 "base_bdevs_list": [ 00:18:13.557 { 00:18:13.557 "name": "pt1", 00:18:13.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.557 "is_configured": true, 00:18:13.557 "data_offset": 2048, 00:18:13.557 "data_size": 63488 00:18:13.557 }, 00:18:13.557 { 00:18:13.557 "name": null, 00:18:13.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.557 "is_configured": false, 00:18:13.557 "data_offset": 2048, 00:18:13.557 "data_size": 63488 00:18:13.557 }, 00:18:13.557 { 00:18:13.557 "name": null, 00:18:13.557 
"uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.557 "is_configured": false, 00:18:13.557 "data_offset": 2048, 00:18:13.557 "data_size": 63488 00:18:13.557 }, 00:18:13.557 { 00:18:13.557 "name": null, 00:18:13.557 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.557 "is_configured": false, 00:18:13.557 "data_offset": 2048, 00:18:13.557 "data_size": 63488 00:18:13.557 } 00:18:13.557 ] 00:18:13.557 }' 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.557 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.817 [2024-12-09 22:59:29.664951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.817 [2024-12-09 22:59:29.665096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.817 [2024-12-09 22:59:29.665141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:13.817 [2024-12-09 22:59:29.665197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.817 [2024-12-09 22:59:29.665765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.817 [2024-12-09 22:59:29.665836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.817 [2024-12-09 22:59:29.665974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:13.817 [2024-12-09 22:59:29.666034] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.817 pt2 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.817 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.077 [2024-12-09 22:59:29.676961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.077 22:59:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.077 "name": "raid_bdev1", 00:18:14.077 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:14.077 "strip_size_kb": 64, 00:18:14.077 "state": "configuring", 00:18:14.077 "raid_level": "concat", 00:18:14.077 "superblock": true, 00:18:14.077 "num_base_bdevs": 4, 00:18:14.077 "num_base_bdevs_discovered": 1, 00:18:14.077 "num_base_bdevs_operational": 4, 00:18:14.077 "base_bdevs_list": [ 00:18:14.077 { 00:18:14.077 "name": "pt1", 00:18:14.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.077 "is_configured": true, 00:18:14.077 "data_offset": 2048, 00:18:14.077 "data_size": 63488 00:18:14.077 }, 00:18:14.077 { 00:18:14.077 "name": null, 00:18:14.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.077 "is_configured": false, 00:18:14.077 "data_offset": 0, 00:18:14.077 "data_size": 63488 00:18:14.077 }, 00:18:14.077 { 00:18:14.077 "name": null, 00:18:14.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.077 "is_configured": false, 00:18:14.077 "data_offset": 2048, 00:18:14.077 "data_size": 63488 00:18:14.077 }, 00:18:14.077 { 00:18:14.077 "name": null, 00:18:14.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:14.077 "is_configured": false, 00:18:14.077 "data_offset": 2048, 00:18:14.077 "data_size": 63488 00:18:14.077 } 00:18:14.077 ] 00:18:14.077 }' 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.077 22:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:14.337 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:14.337 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:14.337 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.337 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.337 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.337 [2024-12-09 22:59:30.164638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.337 [2024-12-09 22:59:30.164804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.338 [2024-12-09 22:59:30.164852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:14.338 [2024-12-09 22:59:30.164904] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.338 [2024-12-09 22:59:30.165442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.338 [2024-12-09 22:59:30.165489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.338 [2024-12-09 22:59:30.165592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:14.338 [2024-12-09 22:59:30.165617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.338 pt2 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.338 [2024-12-09 22:59:30.176612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:14.338 [2024-12-09 22:59:30.176740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.338 [2024-12-09 22:59:30.176769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:14.338 [2024-12-09 22:59:30.176779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.338 [2024-12-09 22:59:30.177287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.338 [2024-12-09 22:59:30.177307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:14.338 [2024-12-09 22:59:30.177401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:14.338 [2024-12-09 22:59:30.177434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:14.338 pt3 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.338 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.338 [2024-12-09 22:59:30.188578] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:18:14.338 [2024-12-09 22:59:30.188690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.338 [2024-12-09 22:59:30.188714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:14.338 [2024-12-09 22:59:30.188725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.338 [2024-12-09 22:59:30.189222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.338 [2024-12-09 22:59:30.189246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:14.338 [2024-12-09 22:59:30.189341] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:14.338 [2024-12-09 22:59:30.189370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:14.338 [2024-12-09 22:59:30.189584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:14.338 [2024-12-09 22:59:30.189612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:14.338 [2024-12-09 22:59:30.189880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:14.338 [2024-12-09 22:59:30.190055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:14.338 [2024-12-09 22:59:30.190069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:14.338 [2024-12-09 22:59:30.190216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.598 pt4 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:14.598 
22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.598 "name": "raid_bdev1", 00:18:14.598 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:14.598 "strip_size_kb": 64, 00:18:14.598 "state": "online", 00:18:14.598 "raid_level": "concat", 00:18:14.598 "superblock": true, 00:18:14.598 
"num_base_bdevs": 4, 00:18:14.598 "num_base_bdevs_discovered": 4, 00:18:14.598 "num_base_bdevs_operational": 4, 00:18:14.598 "base_bdevs_list": [ 00:18:14.598 { 00:18:14.598 "name": "pt1", 00:18:14.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.598 "is_configured": true, 00:18:14.598 "data_offset": 2048, 00:18:14.598 "data_size": 63488 00:18:14.598 }, 00:18:14.598 { 00:18:14.598 "name": "pt2", 00:18:14.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.598 "is_configured": true, 00:18:14.598 "data_offset": 2048, 00:18:14.598 "data_size": 63488 00:18:14.598 }, 00:18:14.598 { 00:18:14.598 "name": "pt3", 00:18:14.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.598 "is_configured": true, 00:18:14.598 "data_offset": 2048, 00:18:14.598 "data_size": 63488 00:18:14.598 }, 00:18:14.598 { 00:18:14.598 "name": "pt4", 00:18:14.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:14.598 "is_configured": true, 00:18:14.598 "data_offset": 2048, 00:18:14.598 "data_size": 63488 00:18:14.598 } 00:18:14.598 ] 00:18:14.598 }' 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.598 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.858 [2024-12-09 22:59:30.688136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.858 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.117 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.117 "name": "raid_bdev1", 00:18:15.117 "aliases": [ 00:18:15.117 "6486e6f2-6493-4c04-bdb5-46a67a03c842" 00:18:15.117 ], 00:18:15.117 "product_name": "Raid Volume", 00:18:15.117 "block_size": 512, 00:18:15.117 "num_blocks": 253952, 00:18:15.117 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:15.117 "assigned_rate_limits": { 00:18:15.117 "rw_ios_per_sec": 0, 00:18:15.117 "rw_mbytes_per_sec": 0, 00:18:15.117 "r_mbytes_per_sec": 0, 00:18:15.117 "w_mbytes_per_sec": 0 00:18:15.117 }, 00:18:15.117 "claimed": false, 00:18:15.117 "zoned": false, 00:18:15.117 "supported_io_types": { 00:18:15.117 "read": true, 00:18:15.117 "write": true, 00:18:15.117 "unmap": true, 00:18:15.117 "flush": true, 00:18:15.117 "reset": true, 00:18:15.117 "nvme_admin": false, 00:18:15.117 "nvme_io": false, 00:18:15.117 "nvme_io_md": false, 00:18:15.117 "write_zeroes": true, 00:18:15.117 "zcopy": false, 00:18:15.117 "get_zone_info": false, 00:18:15.117 "zone_management": false, 00:18:15.117 "zone_append": false, 00:18:15.117 "compare": false, 00:18:15.117 "compare_and_write": false, 00:18:15.117 "abort": false, 00:18:15.117 "seek_hole": false, 00:18:15.117 "seek_data": false, 00:18:15.117 "copy": false, 00:18:15.117 "nvme_iov_md": false 00:18:15.117 }, 00:18:15.117 "memory_domains": [ 00:18:15.117 { 00:18:15.117 "dma_device_id": "system", 
00:18:15.117 "dma_device_type": 1 00:18:15.117 }, 00:18:15.117 { 00:18:15.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.118 "dma_device_type": 2 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "dma_device_id": "system", 00:18:15.118 "dma_device_type": 1 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.118 "dma_device_type": 2 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "dma_device_id": "system", 00:18:15.118 "dma_device_type": 1 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.118 "dma_device_type": 2 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "dma_device_id": "system", 00:18:15.118 "dma_device_type": 1 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.118 "dma_device_type": 2 00:18:15.118 } 00:18:15.118 ], 00:18:15.118 "driver_specific": { 00:18:15.118 "raid": { 00:18:15.118 "uuid": "6486e6f2-6493-4c04-bdb5-46a67a03c842", 00:18:15.118 "strip_size_kb": 64, 00:18:15.118 "state": "online", 00:18:15.118 "raid_level": "concat", 00:18:15.118 "superblock": true, 00:18:15.118 "num_base_bdevs": 4, 00:18:15.118 "num_base_bdevs_discovered": 4, 00:18:15.118 "num_base_bdevs_operational": 4, 00:18:15.118 "base_bdevs_list": [ 00:18:15.118 { 00:18:15.118 "name": "pt1", 00:18:15.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.118 "is_configured": true, 00:18:15.118 "data_offset": 2048, 00:18:15.118 "data_size": 63488 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "name": "pt2", 00:18:15.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.118 "is_configured": true, 00:18:15.118 "data_offset": 2048, 00:18:15.118 "data_size": 63488 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "name": "pt3", 00:18:15.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:15.118 "is_configured": true, 00:18:15.118 "data_offset": 2048, 00:18:15.118 "data_size": 63488 00:18:15.118 }, 00:18:15.118 { 00:18:15.118 "name": "pt4", 00:18:15.118 
"uuid": "00000000-0000-0000-0000-000000000004", 00:18:15.118 "is_configured": true, 00:18:15.118 "data_offset": 2048, 00:18:15.118 "data_size": 63488 00:18:15.118 } 00:18:15.118 ] 00:18:15.118 } 00:18:15.118 } 00:18:15.118 }' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:15.118 pt2 00:18:15.118 pt3 00:18:15.118 pt4' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:15.118 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.378 22:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.378 [2024-12-09 22:59:31.031632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6486e6f2-6493-4c04-bdb5-46a67a03c842 '!=' 6486e6f2-6493-4c04-bdb5-46a67a03c842 ']' 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73218 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73218 ']' 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73218 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:15.378 22:59:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73218 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73218' 00:18:15.378 killing process with pid 73218 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73218 00:18:15.378 [2024-12-09 22:59:31.112254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.378 [2024-12-09 22:59:31.112420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.378 22:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73218 00:18:15.378 [2024-12-09 22:59:31.112558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.378 [2024-12-09 22:59:31.112571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:15.946 [2024-12-09 22:59:31.534043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.326 22:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:17.326 00:18:17.326 real 0m5.994s 00:18:17.326 user 0m8.518s 00:18:17.326 sys 0m1.046s 00:18:17.326 22:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.326 22:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.326 ************************************ 00:18:17.326 END TEST raid_superblock_test 00:18:17.326 ************************************ 00:18:17.326 
22:59:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:18:17.326 22:59:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:17.326 22:59:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.326 22:59:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.326 ************************************ 00:18:17.326 START TEST raid_read_error_test 00:18:17.326 ************************************ 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ktIoIKygVO 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73488 00:18:17.326 22:59:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73488 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73488 ']' 00:18:17.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.326 22:59:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.326 [2024-12-09 22:59:33.048990] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:18:17.326 [2024-12-09 22:59:33.049127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73488 ] 00:18:17.585 [2024-12-09 22:59:33.213261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.585 [2024-12-09 22:59:33.347802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.845 [2024-12-09 22:59:33.574316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.845 [2024-12-09 22:59:33.574424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.105 22:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.105 22:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:18.105 22:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:18.105 22:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:18.105 22:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.105 22:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.365 BaseBdev1_malloc 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.365 true 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.365 [2024-12-09 22:59:34.020738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:18.365 [2024-12-09 22:59:34.020862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.365 [2024-12-09 22:59:34.020893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:18.365 [2024-12-09 22:59:34.020907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.365 [2024-12-09 22:59:34.023442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.365 [2024-12-09 22:59:34.023503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:18.365 BaseBdev1 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.365 BaseBdev2_malloc 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.365 true 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.365 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.365 [2024-12-09 22:59:34.093180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:18.365 [2024-12-09 22:59:34.093247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.365 [2024-12-09 22:59:34.093269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:18.365 [2024-12-09 22:59:34.093282] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.366 [2024-12-09 22:59:34.095789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.366 [2024-12-09 22:59:34.095833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:18.366 BaseBdev2 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.366 BaseBdev3_malloc 00:18:18.366 22:59:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.366 true 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.366 [2024-12-09 22:59:34.178743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:18.366 [2024-12-09 22:59:34.178809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.366 [2024-12-09 22:59:34.178835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:18.366 [2024-12-09 22:59:34.178848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.366 [2024-12-09 22:59:34.181411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.366 [2024-12-09 22:59:34.181471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:18.366 BaseBdev3 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.366 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.626 BaseBdev4_malloc 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.626 true 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.626 [2024-12-09 22:59:34.252613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:18.626 [2024-12-09 22:59:34.252691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.626 [2024-12-09 22:59:34.252719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:18.626 [2024-12-09 22:59:34.252732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.626 [2024-12-09 22:59:34.255334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.626 [2024-12-09 22:59:34.255392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:18.626 BaseBdev4 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.626 [2024-12-09 22:59:34.264713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.626 [2024-12-09 22:59:34.266931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.626 [2024-12-09 22:59:34.267152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.626 [2024-12-09 22:59:34.267251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:18.626 [2024-12-09 22:59:34.267620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:18.626 [2024-12-09 22:59:34.267644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:18.626 [2024-12-09 22:59:34.267987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:18.626 [2024-12-09 22:59:34.268189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:18.626 [2024-12-09 22:59:34.268203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:18.626 [2024-12-09 22:59:34.268439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:18.626 22:59:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.626 "name": "raid_bdev1", 00:18:18.626 "uuid": "7091b142-5ee0-40fb-b022-b5efc49facc4", 00:18:18.626 "strip_size_kb": 64, 00:18:18.626 "state": "online", 00:18:18.626 "raid_level": "concat", 00:18:18.626 "superblock": true, 00:18:18.626 "num_base_bdevs": 4, 00:18:18.626 "num_base_bdevs_discovered": 4, 00:18:18.626 "num_base_bdevs_operational": 4, 00:18:18.626 "base_bdevs_list": [ 
00:18:18.626 { 00:18:18.626 "name": "BaseBdev1", 00:18:18.626 "uuid": "e548fe75-c169-5dfb-93f4-ba9fd81ca879", 00:18:18.626 "is_configured": true, 00:18:18.626 "data_offset": 2048, 00:18:18.626 "data_size": 63488 00:18:18.626 }, 00:18:18.626 { 00:18:18.626 "name": "BaseBdev2", 00:18:18.626 "uuid": "7ed8ec69-36c3-5c07-8bb7-e56f45f50632", 00:18:18.626 "is_configured": true, 00:18:18.626 "data_offset": 2048, 00:18:18.626 "data_size": 63488 00:18:18.626 }, 00:18:18.626 { 00:18:18.626 "name": "BaseBdev3", 00:18:18.626 "uuid": "cd6430b0-9080-5f4f-9605-a9420b7654b1", 00:18:18.626 "is_configured": true, 00:18:18.626 "data_offset": 2048, 00:18:18.626 "data_size": 63488 00:18:18.626 }, 00:18:18.626 { 00:18:18.626 "name": "BaseBdev4", 00:18:18.626 "uuid": "f52d9ed6-d7dd-5c1f-bcc2-f8161b5ef429", 00:18:18.626 "is_configured": true, 00:18:18.626 "data_offset": 2048, 00:18:18.626 "data_size": 63488 00:18:18.626 } 00:18:18.626 ] 00:18:18.626 }' 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.626 22:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.886 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:18.886 22:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:19.145 [2024-12-09 22:59:34.822307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.089 22:59:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.089 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.089 22:59:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.089 "name": "raid_bdev1", 00:18:20.089 "uuid": "7091b142-5ee0-40fb-b022-b5efc49facc4", 00:18:20.089 "strip_size_kb": 64, 00:18:20.089 "state": "online", 00:18:20.089 "raid_level": "concat", 00:18:20.089 "superblock": true, 00:18:20.089 "num_base_bdevs": 4, 00:18:20.089 "num_base_bdevs_discovered": 4, 00:18:20.089 "num_base_bdevs_operational": 4, 00:18:20.089 "base_bdevs_list": [ 00:18:20.089 { 00:18:20.089 "name": "BaseBdev1", 00:18:20.089 "uuid": "e548fe75-c169-5dfb-93f4-ba9fd81ca879", 00:18:20.089 "is_configured": true, 00:18:20.089 "data_offset": 2048, 00:18:20.089 "data_size": 63488 00:18:20.089 }, 00:18:20.089 { 00:18:20.089 "name": "BaseBdev2", 00:18:20.089 "uuid": "7ed8ec69-36c3-5c07-8bb7-e56f45f50632", 00:18:20.089 "is_configured": true, 00:18:20.090 "data_offset": 2048, 00:18:20.090 "data_size": 63488 00:18:20.090 }, 00:18:20.090 { 00:18:20.090 "name": "BaseBdev3", 00:18:20.090 "uuid": "cd6430b0-9080-5f4f-9605-a9420b7654b1", 00:18:20.090 "is_configured": true, 00:18:20.090 "data_offset": 2048, 00:18:20.090 "data_size": 63488 00:18:20.090 }, 00:18:20.090 { 00:18:20.090 "name": "BaseBdev4", 00:18:20.090 "uuid": "f52d9ed6-d7dd-5c1f-bcc2-f8161b5ef429", 00:18:20.090 "is_configured": true, 00:18:20.090 "data_offset": 2048, 00:18:20.090 "data_size": 63488 00:18:20.090 } 00:18:20.090 ] 00:18:20.090 }' 00:18:20.090 22:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.090 22:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.659 [2024-12-09 22:59:36.232915] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.659 [2024-12-09 22:59:36.233026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.659 [2024-12-09 22:59:36.236275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.659 [2024-12-09 22:59:36.236387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.659 [2024-12-09 22:59:36.236452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.659 [2024-12-09 22:59:36.236486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:20.659 { 00:18:20.659 "results": [ 00:18:20.659 { 00:18:20.659 "job": "raid_bdev1", 00:18:20.659 "core_mask": "0x1", 00:18:20.659 "workload": "randrw", 00:18:20.659 "percentage": 50, 00:18:20.659 "status": "finished", 00:18:20.659 "queue_depth": 1, 00:18:20.659 "io_size": 131072, 00:18:20.659 "runtime": 1.411147, 00:18:20.659 "iops": 12481.336104601434, 00:18:20.659 "mibps": 1560.1670130751793, 00:18:20.659 "io_failed": 1, 00:18:20.659 "io_timeout": 0, 00:18:20.659 "avg_latency_us": 110.84756855280362, 00:18:20.659 "min_latency_us": 28.841921397379913, 00:18:20.659 "max_latency_us": 1717.1004366812226 00:18:20.659 } 00:18:20.659 ], 00:18:20.659 "core_count": 1 00:18:20.659 } 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73488 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73488 ']' 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73488 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:18:20.659 22:59:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73488 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73488' 00:18:20.659 killing process with pid 73488 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73488 00:18:20.659 [2024-12-09 22:59:36.282721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.659 22:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73488 00:18:20.918 [2024-12-09 22:59:36.664210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ktIoIKygVO 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:18:22.295 00:18:22.295 real 0m5.130s 00:18:22.295 user 0m6.074s 00:18:22.295 sys 0m0.631s 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.295 22:59:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.295 ************************************ 00:18:22.295 END TEST raid_read_error_test 00:18:22.295 ************************************ 00:18:22.295 22:59:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:18:22.295 22:59:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:22.295 22:59:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.295 22:59:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.295 ************************************ 00:18:22.295 START TEST raid_write_error_test 00:18:22.295 ************************************ 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:22.295 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:22.555 22:59:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.Jq6lFY1LIe 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73634 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73634 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73634 ']' 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.555 22:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.555 [2024-12-09 22:59:38.247737] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:18:22.555 [2024-12-09 22:59:38.247865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73634 ] 00:18:22.813 [2024-12-09 22:59:38.418153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.813 [2024-12-09 22:59:38.552614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.073 [2024-12-09 22:59:38.785308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.073 [2024-12-09 22:59:38.785387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 BaseBdev1_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 true 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 [2024-12-09 22:59:39.274277] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:23.642 [2024-12-09 22:59:39.274355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.642 [2024-12-09 22:59:39.274384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:23.642 [2024-12-09 22:59:39.274399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.642 [2024-12-09 22:59:39.276976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.642 [2024-12-09 22:59:39.277022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:23.642 BaseBdev1 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 BaseBdev2_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:23.642 22:59:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 true 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 [2024-12-09 22:59:39.347811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:23.642 [2024-12-09 22:59:39.347882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.642 [2024-12-09 22:59:39.347904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:23.642 [2024-12-09 22:59:39.347916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.642 [2024-12-09 22:59:39.350445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.642 [2024-12-09 22:59:39.350506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:23.642 BaseBdev2 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:23.642 BaseBdev3_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 true 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.642 [2024-12-09 22:59:39.461822] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:23.642 [2024-12-09 22:59:39.461906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.642 [2024-12-09 22:59:39.461935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:23.642 [2024-12-09 22:59:39.461950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.642 [2024-12-09 22:59:39.464952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.642 [2024-12-09 22:59:39.464997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:23.642 BaseBdev3 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.642 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.902 BaseBdev4_malloc 00:18:23.902 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.903 true 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.903 [2024-12-09 22:59:39.543668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:23.903 [2024-12-09 22:59:39.543748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.903 [2024-12-09 22:59:39.543777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:23.903 [2024-12-09 22:59:39.543791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.903 [2024-12-09 22:59:39.546821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.903 [2024-12-09 22:59:39.546868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:23.903 BaseBdev4 
00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.903 [2024-12-09 22:59:39.555844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.903 [2024-12-09 22:59:39.558365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.903 [2024-12-09 22:59:39.558493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.903 [2024-12-09 22:59:39.558572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:23.903 [2024-12-09 22:59:39.558863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:23.903 [2024-12-09 22:59:39.558902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:23.903 [2024-12-09 22:59:39.559248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:23.903 [2024-12-09 22:59:39.559485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:23.903 [2024-12-09 22:59:39.559504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:23.903 [2024-12-09 22:59:39.559773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.903 "name": "raid_bdev1", 00:18:23.903 "uuid": "cb4b2201-ed4b-41f4-969a-69d66f17929b", 00:18:23.903 "strip_size_kb": 64, 00:18:23.903 "state": "online", 00:18:23.903 "raid_level": "concat", 00:18:23.903 "superblock": true, 00:18:23.903 "num_base_bdevs": 4, 00:18:23.903 "num_base_bdevs_discovered": 4, 00:18:23.903 
"num_base_bdevs_operational": 4, 00:18:23.903 "base_bdevs_list": [ 00:18:23.903 { 00:18:23.903 "name": "BaseBdev1", 00:18:23.903 "uuid": "05265e1e-b096-5ca7-bc81-66ec3bebc9e0", 00:18:23.903 "is_configured": true, 00:18:23.903 "data_offset": 2048, 00:18:23.903 "data_size": 63488 00:18:23.903 }, 00:18:23.903 { 00:18:23.903 "name": "BaseBdev2", 00:18:23.903 "uuid": "e2df8cb7-454f-5733-868a-ecca1bc9ece7", 00:18:23.903 "is_configured": true, 00:18:23.903 "data_offset": 2048, 00:18:23.903 "data_size": 63488 00:18:23.903 }, 00:18:23.903 { 00:18:23.903 "name": "BaseBdev3", 00:18:23.903 "uuid": "fcf87d6f-7533-516a-bcf7-b5032b2ed3b2", 00:18:23.903 "is_configured": true, 00:18:23.903 "data_offset": 2048, 00:18:23.903 "data_size": 63488 00:18:23.903 }, 00:18:23.903 { 00:18:23.903 "name": "BaseBdev4", 00:18:23.903 "uuid": "3527770d-ae9d-517e-8afe-c0c9e4b1bafb", 00:18:23.903 "is_configured": true, 00:18:23.903 "data_offset": 2048, 00:18:23.903 "data_size": 63488 00:18:23.903 } 00:18:23.903 ] 00:18:23.903 }' 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.903 22:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.471 22:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:24.471 22:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:24.471 [2024-12-09 22:59:40.160961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.407 22:59:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.407 "name": "raid_bdev1", 00:18:25.407 "uuid": "cb4b2201-ed4b-41f4-969a-69d66f17929b", 00:18:25.407 "strip_size_kb": 64, 00:18:25.407 "state": "online", 00:18:25.407 "raid_level": "concat", 00:18:25.407 "superblock": true, 00:18:25.407 "num_base_bdevs": 4, 00:18:25.407 "num_base_bdevs_discovered": 4, 00:18:25.407 "num_base_bdevs_operational": 4, 00:18:25.407 "base_bdevs_list": [ 00:18:25.407 { 00:18:25.407 "name": "BaseBdev1", 00:18:25.407 "uuid": "05265e1e-b096-5ca7-bc81-66ec3bebc9e0", 00:18:25.407 "is_configured": true, 00:18:25.407 "data_offset": 2048, 00:18:25.407 "data_size": 63488 00:18:25.407 }, 00:18:25.407 { 00:18:25.407 "name": "BaseBdev2", 00:18:25.407 "uuid": "e2df8cb7-454f-5733-868a-ecca1bc9ece7", 00:18:25.407 "is_configured": true, 00:18:25.407 "data_offset": 2048, 00:18:25.407 "data_size": 63488 00:18:25.407 }, 00:18:25.407 { 00:18:25.407 "name": "BaseBdev3", 00:18:25.407 "uuid": "fcf87d6f-7533-516a-bcf7-b5032b2ed3b2", 00:18:25.407 "is_configured": true, 00:18:25.407 "data_offset": 2048, 00:18:25.407 "data_size": 63488 00:18:25.407 }, 00:18:25.407 { 00:18:25.407 "name": "BaseBdev4", 00:18:25.407 "uuid": "3527770d-ae9d-517e-8afe-c0c9e4b1bafb", 00:18:25.407 "is_configured": true, 00:18:25.407 "data_offset": 2048, 00:18:25.407 "data_size": 63488 00:18:25.407 } 00:18:25.407 ] 00:18:25.407 }' 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.407 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.974 [2024-12-09 22:59:41.564682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.974 [2024-12-09 22:59:41.564746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.974 [2024-12-09 22:59:41.568157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.974 [2024-12-09 22:59:41.568243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.974 [2024-12-09 22:59:41.568306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.974 [2024-12-09 22:59:41.568331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:25.974 { 00:18:25.974 "results": [ 00:18:25.974 { 00:18:25.974 "job": "raid_bdev1", 00:18:25.974 "core_mask": "0x1", 00:18:25.974 "workload": "randrw", 00:18:25.974 "percentage": 50, 00:18:25.974 "status": "finished", 00:18:25.974 "queue_depth": 1, 00:18:25.974 "io_size": 131072, 00:18:25.974 "runtime": 1.403901, 00:18:25.974 "iops": 10850.480197677756, 00:18:25.974 "mibps": 1356.3100247097195, 00:18:25.974 "io_failed": 1, 00:18:25.974 "io_timeout": 0, 00:18:25.974 "avg_latency_us": 129.0918312462413, 00:18:25.974 "min_latency_us": 33.98427947598253, 00:18:25.974 "max_latency_us": 1795.8008733624454 00:18:25.974 } 00:18:25.974 ], 00:18:25.974 "core_count": 1 00:18:25.974 } 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73634 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73634 ']' 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73634 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73634 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.974 killing process with pid 73634 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73634' 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73634 00:18:25.974 [2024-12-09 22:59:41.606405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.974 22:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73634 00:18:26.234 [2024-12-09 22:59:42.043590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Jq6lFY1LIe 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:18:28.140 00:18:28.140 real 0m5.459s 00:18:28.140 user 0m6.420s 
00:18:28.140 sys 0m0.635s 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.140 22:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.141 ************************************ 00:18:28.141 END TEST raid_write_error_test 00:18:28.141 ************************************ 00:18:28.141 22:59:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:28.141 22:59:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:28.141 22:59:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:28.141 22:59:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.141 22:59:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.141 ************************************ 00:18:28.141 START TEST raid_state_function_test 00:18:28.141 ************************************ 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:28.141 
22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:28.141 22:59:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73783 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:28.141 Process raid pid: 73783 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73783' 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73783 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73783 ']' 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.141 22:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.141 [2024-12-09 22:59:43.798155] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:18:28.141 [2024-12-09 22:59:43.798305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.141 [2024-12-09 22:59:43.987801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.400 [2024-12-09 22:59:44.152760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.659 [2024-12-09 22:59:44.440592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.659 [2024-12-09 22:59:44.440679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.918 [2024-12-09 22:59:44.713190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.918 [2024-12-09 22:59:44.713272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.918 [2024-12-09 22:59:44.713292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.918 [2024-12-09 22:59:44.713306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.918 [2024-12-09 22:59:44.713313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:28.918 [2024-12-09 22:59:44.713326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.918 [2024-12-09 22:59:44.713333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.918 [2024-12-09 22:59:44.713343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.918 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.184 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.184 "name": "Existed_Raid", 00:18:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.184 "strip_size_kb": 0, 00:18:29.184 "state": "configuring", 00:18:29.184 "raid_level": "raid1", 00:18:29.184 "superblock": false, 00:18:29.184 "num_base_bdevs": 4, 00:18:29.184 "num_base_bdevs_discovered": 0, 00:18:29.184 "num_base_bdevs_operational": 4, 00:18:29.184 "base_bdevs_list": [ 00:18:29.184 { 00:18:29.184 "name": "BaseBdev1", 00:18:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.185 "is_configured": false, 00:18:29.185 "data_offset": 0, 00:18:29.185 "data_size": 0 00:18:29.185 }, 00:18:29.185 { 00:18:29.185 "name": "BaseBdev2", 00:18:29.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.185 "is_configured": false, 00:18:29.185 "data_offset": 0, 00:18:29.185 "data_size": 0 00:18:29.185 }, 00:18:29.185 { 00:18:29.185 "name": "BaseBdev3", 00:18:29.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.185 "is_configured": false, 00:18:29.185 "data_offset": 0, 00:18:29.185 "data_size": 0 00:18:29.185 }, 00:18:29.185 { 00:18:29.185 "name": "BaseBdev4", 00:18:29.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.185 "is_configured": false, 00:18:29.185 "data_offset": 0, 00:18:29.185 "data_size": 0 00:18:29.185 } 00:18:29.185 ] 00:18:29.185 }' 00:18:29.185 22:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.185 22:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.456 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:18:29.456 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.457 [2024-12-09 22:59:45.228338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.457 [2024-12-09 22:59:45.228403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.457 [2024-12-09 22:59:45.240306] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:29.457 [2024-12-09 22:59:45.240373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:29.457 [2024-12-09 22:59:45.240385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.457 [2024-12-09 22:59:45.240398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.457 [2024-12-09 22:59:45.240406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:29.457 [2024-12-09 22:59:45.240418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:29.457 [2024-12-09 22:59:45.240435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:29.457 [2024-12-09 22:59:45.240447] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.457 [2024-12-09 22:59:45.302566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.457 BaseBdev1 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.457 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 [ 00:18:29.715 { 00:18:29.715 "name": "BaseBdev1", 00:18:29.715 "aliases": [ 00:18:29.715 "a5df1412-fb6e-4d8e-af33-9313d86159a9" 00:18:29.715 ], 00:18:29.715 "product_name": "Malloc disk", 00:18:29.715 "block_size": 512, 00:18:29.715 "num_blocks": 65536, 00:18:29.715 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:29.715 "assigned_rate_limits": { 00:18:29.715 "rw_ios_per_sec": 0, 00:18:29.715 "rw_mbytes_per_sec": 0, 00:18:29.715 "r_mbytes_per_sec": 0, 00:18:29.715 "w_mbytes_per_sec": 0 00:18:29.715 }, 00:18:29.715 "claimed": true, 00:18:29.715 "claim_type": "exclusive_write", 00:18:29.715 "zoned": false, 00:18:29.715 "supported_io_types": { 00:18:29.715 "read": true, 00:18:29.715 "write": true, 00:18:29.715 "unmap": true, 00:18:29.715 "flush": true, 00:18:29.715 "reset": true, 00:18:29.715 "nvme_admin": false, 00:18:29.715 "nvme_io": false, 00:18:29.715 "nvme_io_md": false, 00:18:29.715 "write_zeroes": true, 00:18:29.715 "zcopy": true, 00:18:29.715 "get_zone_info": false, 00:18:29.715 "zone_management": false, 00:18:29.715 "zone_append": false, 00:18:29.715 "compare": false, 00:18:29.715 "compare_and_write": false, 00:18:29.715 "abort": true, 00:18:29.715 "seek_hole": false, 00:18:29.715 "seek_data": false, 00:18:29.715 "copy": true, 00:18:29.715 "nvme_iov_md": false 00:18:29.715 }, 00:18:29.715 "memory_domains": [ 00:18:29.715 { 00:18:29.715 "dma_device_id": "system", 00:18:29.715 "dma_device_type": 1 00:18:29.715 }, 00:18:29.715 { 00:18:29.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.715 "dma_device_type": 2 00:18:29.715 } 00:18:29.715 ], 00:18:29.715 "driver_specific": {} 00:18:29.715 } 00:18:29.715 ] 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.715 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.716 "name": "Existed_Raid", 
00:18:29.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.716 "strip_size_kb": 0, 00:18:29.716 "state": "configuring", 00:18:29.716 "raid_level": "raid1", 00:18:29.716 "superblock": false, 00:18:29.716 "num_base_bdevs": 4, 00:18:29.716 "num_base_bdevs_discovered": 1, 00:18:29.716 "num_base_bdevs_operational": 4, 00:18:29.716 "base_bdevs_list": [ 00:18:29.716 { 00:18:29.716 "name": "BaseBdev1", 00:18:29.716 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:29.716 "is_configured": true, 00:18:29.716 "data_offset": 0, 00:18:29.716 "data_size": 65536 00:18:29.716 }, 00:18:29.716 { 00:18:29.716 "name": "BaseBdev2", 00:18:29.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.716 "is_configured": false, 00:18:29.716 "data_offset": 0, 00:18:29.716 "data_size": 0 00:18:29.716 }, 00:18:29.716 { 00:18:29.716 "name": "BaseBdev3", 00:18:29.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.716 "is_configured": false, 00:18:29.716 "data_offset": 0, 00:18:29.716 "data_size": 0 00:18:29.716 }, 00:18:29.716 { 00:18:29.716 "name": "BaseBdev4", 00:18:29.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.716 "is_configured": false, 00:18:29.716 "data_offset": 0, 00:18:29.716 "data_size": 0 00:18:29.716 } 00:18:29.716 ] 00:18:29.716 }' 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.716 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.976 [2024-12-09 22:59:45.817821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.976 [2024-12-09 22:59:45.817939] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.976 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.976 [2024-12-09 22:59:45.829819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.236 [2024-12-09 22:59:45.832344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.236 [2024-12-09 22:59:45.832400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.236 [2024-12-09 22:59:45.832412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:30.236 [2024-12-09 22:59:45.832433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:30.236 [2024-12-09 22:59:45.832442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:30.236 [2024-12-09 22:59:45.832452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:30.236 
22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.236 "name": "Existed_Raid", 00:18:30.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.236 "strip_size_kb": 0, 00:18:30.236 "state": "configuring", 00:18:30.236 "raid_level": "raid1", 00:18:30.236 "superblock": false, 00:18:30.236 "num_base_bdevs": 4, 00:18:30.236 "num_base_bdevs_discovered": 1, 
00:18:30.236 "num_base_bdevs_operational": 4, 00:18:30.236 "base_bdevs_list": [ 00:18:30.236 { 00:18:30.236 "name": "BaseBdev1", 00:18:30.236 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:30.236 "is_configured": true, 00:18:30.236 "data_offset": 0, 00:18:30.236 "data_size": 65536 00:18:30.236 }, 00:18:30.236 { 00:18:30.236 "name": "BaseBdev2", 00:18:30.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.236 "is_configured": false, 00:18:30.236 "data_offset": 0, 00:18:30.236 "data_size": 0 00:18:30.236 }, 00:18:30.236 { 00:18:30.236 "name": "BaseBdev3", 00:18:30.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.236 "is_configured": false, 00:18:30.236 "data_offset": 0, 00:18:30.236 "data_size": 0 00:18:30.236 }, 00:18:30.236 { 00:18:30.236 "name": "BaseBdev4", 00:18:30.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.236 "is_configured": false, 00:18:30.236 "data_offset": 0, 00:18:30.236 "data_size": 0 00:18:30.236 } 00:18:30.236 ] 00:18:30.236 }' 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.236 22:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.495 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.496 [2024-12-09 22:59:46.324597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.496 BaseBdev2 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.496 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.754 [ 00:18:30.754 { 00:18:30.754 "name": "BaseBdev2", 00:18:30.754 "aliases": [ 00:18:30.754 "926dd60d-254c-4f32-a04d-3ae525d130da" 00:18:30.754 ], 00:18:30.755 "product_name": "Malloc disk", 00:18:30.755 "block_size": 512, 00:18:30.755 "num_blocks": 65536, 00:18:30.755 "uuid": "926dd60d-254c-4f32-a04d-3ae525d130da", 00:18:30.755 "assigned_rate_limits": { 00:18:30.755 "rw_ios_per_sec": 0, 00:18:30.755 "rw_mbytes_per_sec": 0, 00:18:30.755 "r_mbytes_per_sec": 0, 00:18:30.755 "w_mbytes_per_sec": 0 00:18:30.755 }, 00:18:30.755 "claimed": true, 00:18:30.755 "claim_type": "exclusive_write", 00:18:30.755 "zoned": false, 00:18:30.755 "supported_io_types": { 00:18:30.755 "read": true, 
00:18:30.755 "write": true, 00:18:30.755 "unmap": true, 00:18:30.755 "flush": true, 00:18:30.755 "reset": true, 00:18:30.755 "nvme_admin": false, 00:18:30.755 "nvme_io": false, 00:18:30.755 "nvme_io_md": false, 00:18:30.755 "write_zeroes": true, 00:18:30.755 "zcopy": true, 00:18:30.755 "get_zone_info": false, 00:18:30.755 "zone_management": false, 00:18:30.755 "zone_append": false, 00:18:30.755 "compare": false, 00:18:30.755 "compare_and_write": false, 00:18:30.755 "abort": true, 00:18:30.755 "seek_hole": false, 00:18:30.755 "seek_data": false, 00:18:30.755 "copy": true, 00:18:30.755 "nvme_iov_md": false 00:18:30.755 }, 00:18:30.755 "memory_domains": [ 00:18:30.755 { 00:18:30.755 "dma_device_id": "system", 00:18:30.755 "dma_device_type": 1 00:18:30.755 }, 00:18:30.755 { 00:18:30.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.755 "dma_device_type": 2 00:18:30.755 } 00:18:30.755 ], 00:18:30.755 "driver_specific": {} 00:18:30.755 } 00:18:30.755 ] 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.755 "name": "Existed_Raid", 00:18:30.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.755 "strip_size_kb": 0, 00:18:30.755 "state": "configuring", 00:18:30.755 "raid_level": "raid1", 00:18:30.755 "superblock": false, 00:18:30.755 "num_base_bdevs": 4, 00:18:30.755 "num_base_bdevs_discovered": 2, 00:18:30.755 "num_base_bdevs_operational": 4, 00:18:30.755 "base_bdevs_list": [ 00:18:30.755 { 00:18:30.755 "name": "BaseBdev1", 00:18:30.755 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:30.755 "is_configured": true, 00:18:30.755 "data_offset": 0, 00:18:30.755 "data_size": 65536 00:18:30.755 }, 00:18:30.755 { 00:18:30.755 "name": "BaseBdev2", 00:18:30.755 "uuid": "926dd60d-254c-4f32-a04d-3ae525d130da", 00:18:30.755 "is_configured": true, 
00:18:30.755 "data_offset": 0, 00:18:30.755 "data_size": 65536 00:18:30.755 }, 00:18:30.755 { 00:18:30.755 "name": "BaseBdev3", 00:18:30.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.755 "is_configured": false, 00:18:30.755 "data_offset": 0, 00:18:30.755 "data_size": 0 00:18:30.755 }, 00:18:30.755 { 00:18:30.755 "name": "BaseBdev4", 00:18:30.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.755 "is_configured": false, 00:18:30.755 "data_offset": 0, 00:18:30.755 "data_size": 0 00:18:30.755 } 00:18:30.755 ] 00:18:30.755 }' 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.755 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.014 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:31.014 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.014 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.273 [2024-12-09 22:59:46.896799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.273 BaseBdev3 00:18:31.273 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.274 [ 00:18:31.274 { 00:18:31.274 "name": "BaseBdev3", 00:18:31.274 "aliases": [ 00:18:31.274 "d05453ea-7638-4cd0-89f2-28f72a817487" 00:18:31.274 ], 00:18:31.274 "product_name": "Malloc disk", 00:18:31.274 "block_size": 512, 00:18:31.274 "num_blocks": 65536, 00:18:31.274 "uuid": "d05453ea-7638-4cd0-89f2-28f72a817487", 00:18:31.274 "assigned_rate_limits": { 00:18:31.274 "rw_ios_per_sec": 0, 00:18:31.274 "rw_mbytes_per_sec": 0, 00:18:31.274 "r_mbytes_per_sec": 0, 00:18:31.274 "w_mbytes_per_sec": 0 00:18:31.274 }, 00:18:31.274 "claimed": true, 00:18:31.274 "claim_type": "exclusive_write", 00:18:31.274 "zoned": false, 00:18:31.274 "supported_io_types": { 00:18:31.274 "read": true, 00:18:31.274 "write": true, 00:18:31.274 "unmap": true, 00:18:31.274 "flush": true, 00:18:31.274 "reset": true, 00:18:31.274 "nvme_admin": false, 00:18:31.274 "nvme_io": false, 00:18:31.274 "nvme_io_md": false, 00:18:31.274 "write_zeroes": true, 00:18:31.274 "zcopy": true, 00:18:31.274 "get_zone_info": false, 00:18:31.274 "zone_management": false, 00:18:31.274 "zone_append": false, 00:18:31.274 "compare": false, 00:18:31.274 "compare_and_write": false, 
00:18:31.274 "abort": true, 00:18:31.274 "seek_hole": false, 00:18:31.274 "seek_data": false, 00:18:31.274 "copy": true, 00:18:31.274 "nvme_iov_md": false 00:18:31.274 }, 00:18:31.274 "memory_domains": [ 00:18:31.274 { 00:18:31.274 "dma_device_id": "system", 00:18:31.274 "dma_device_type": 1 00:18:31.274 }, 00:18:31.274 { 00:18:31.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.274 "dma_device_type": 2 00:18:31.274 } 00:18:31.274 ], 00:18:31.274 "driver_specific": {} 00:18:31.274 } 00:18:31.274 ] 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.274 "name": "Existed_Raid", 00:18:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.274 "strip_size_kb": 0, 00:18:31.274 "state": "configuring", 00:18:31.274 "raid_level": "raid1", 00:18:31.274 "superblock": false, 00:18:31.274 "num_base_bdevs": 4, 00:18:31.274 "num_base_bdevs_discovered": 3, 00:18:31.274 "num_base_bdevs_operational": 4, 00:18:31.274 "base_bdevs_list": [ 00:18:31.274 { 00:18:31.274 "name": "BaseBdev1", 00:18:31.274 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:31.274 "is_configured": true, 00:18:31.274 "data_offset": 0, 00:18:31.274 "data_size": 65536 00:18:31.274 }, 00:18:31.274 { 00:18:31.274 "name": "BaseBdev2", 00:18:31.274 "uuid": "926dd60d-254c-4f32-a04d-3ae525d130da", 00:18:31.274 "is_configured": true, 00:18:31.274 "data_offset": 0, 00:18:31.274 "data_size": 65536 00:18:31.274 }, 00:18:31.274 { 00:18:31.274 "name": "BaseBdev3", 00:18:31.274 "uuid": "d05453ea-7638-4cd0-89f2-28f72a817487", 00:18:31.274 "is_configured": true, 00:18:31.274 "data_offset": 0, 00:18:31.274 "data_size": 65536 00:18:31.274 }, 00:18:31.274 { 00:18:31.274 "name": "BaseBdev4", 00:18:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.274 "is_configured": false, 
00:18:31.274 "data_offset": 0, 00:18:31.274 "data_size": 0 00:18:31.274 } 00:18:31.274 ] 00:18:31.274 }' 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.274 22:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.843 [2024-12-09 22:59:47.483944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.843 [2024-12-09 22:59:47.484025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:31.843 [2024-12-09 22:59:47.484035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:31.843 [2024-12-09 22:59:47.484383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:31.843 [2024-12-09 22:59:47.484670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:31.843 [2024-12-09 22:59:47.484699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:31.843 [2024-12-09 22:59:47.485058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.843 BaseBdev4 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.843 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.843 [ 00:18:31.843 { 00:18:31.843 "name": "BaseBdev4", 00:18:31.843 "aliases": [ 00:18:31.843 "2431c32b-6162-44a2-9c0d-eea53dddc491" 00:18:31.843 ], 00:18:31.843 "product_name": "Malloc disk", 00:18:31.843 "block_size": 512, 00:18:31.843 "num_blocks": 65536, 00:18:31.843 "uuid": "2431c32b-6162-44a2-9c0d-eea53dddc491", 00:18:31.843 "assigned_rate_limits": { 00:18:31.843 "rw_ios_per_sec": 0, 00:18:31.843 "rw_mbytes_per_sec": 0, 00:18:31.843 "r_mbytes_per_sec": 0, 00:18:31.843 "w_mbytes_per_sec": 0 00:18:31.843 }, 00:18:31.843 "claimed": true, 00:18:31.843 "claim_type": "exclusive_write", 00:18:31.843 "zoned": false, 00:18:31.843 "supported_io_types": { 00:18:31.843 "read": true, 00:18:31.843 "write": true, 00:18:31.843 "unmap": true, 00:18:31.843 "flush": true, 00:18:31.843 "reset": true, 00:18:31.843 
"nvme_admin": false, 00:18:31.843 "nvme_io": false, 00:18:31.843 "nvme_io_md": false, 00:18:31.843 "write_zeroes": true, 00:18:31.843 "zcopy": true, 00:18:31.843 "get_zone_info": false, 00:18:31.843 "zone_management": false, 00:18:31.843 "zone_append": false, 00:18:31.843 "compare": false, 00:18:31.843 "compare_and_write": false, 00:18:31.843 "abort": true, 00:18:31.843 "seek_hole": false, 00:18:31.843 "seek_data": false, 00:18:31.843 "copy": true, 00:18:31.843 "nvme_iov_md": false 00:18:31.843 }, 00:18:31.843 "memory_domains": [ 00:18:31.843 { 00:18:31.843 "dma_device_id": "system", 00:18:31.843 "dma_device_type": 1 00:18:31.843 }, 00:18:31.843 { 00:18:31.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.844 "dma_device_type": 2 00:18:31.844 } 00:18:31.844 ], 00:18:31.844 "driver_specific": {} 00:18:31.844 } 00:18:31.844 ] 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.844 22:59:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.844 "name": "Existed_Raid", 00:18:31.844 "uuid": "5558f41d-69b8-47b6-b872-8ef73e262fd3", 00:18:31.844 "strip_size_kb": 0, 00:18:31.844 "state": "online", 00:18:31.844 "raid_level": "raid1", 00:18:31.844 "superblock": false, 00:18:31.844 "num_base_bdevs": 4, 00:18:31.844 "num_base_bdevs_discovered": 4, 00:18:31.844 "num_base_bdevs_operational": 4, 00:18:31.844 "base_bdevs_list": [ 00:18:31.844 { 00:18:31.844 "name": "BaseBdev1", 00:18:31.844 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:31.844 "is_configured": true, 00:18:31.844 "data_offset": 0, 00:18:31.844 "data_size": 65536 00:18:31.844 }, 00:18:31.844 { 00:18:31.844 "name": "BaseBdev2", 00:18:31.844 "uuid": "926dd60d-254c-4f32-a04d-3ae525d130da", 00:18:31.844 "is_configured": true, 00:18:31.844 "data_offset": 0, 00:18:31.844 "data_size": 65536 00:18:31.844 }, 00:18:31.844 { 00:18:31.844 "name": "BaseBdev3", 00:18:31.844 "uuid": 
"d05453ea-7638-4cd0-89f2-28f72a817487", 00:18:31.844 "is_configured": true, 00:18:31.844 "data_offset": 0, 00:18:31.844 "data_size": 65536 00:18:31.844 }, 00:18:31.844 { 00:18:31.844 "name": "BaseBdev4", 00:18:31.844 "uuid": "2431c32b-6162-44a2-9c0d-eea53dddc491", 00:18:31.844 "is_configured": true, 00:18:31.844 "data_offset": 0, 00:18:31.844 "data_size": 65536 00:18:31.844 } 00:18:31.844 ] 00:18:31.844 }' 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.844 22:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.413 [2024-12-09 22:59:48.015664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.413 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.413 22:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.413 "name": "Existed_Raid", 00:18:32.413 "aliases": [ 00:18:32.413 "5558f41d-69b8-47b6-b872-8ef73e262fd3" 00:18:32.413 ], 00:18:32.413 "product_name": "Raid Volume", 00:18:32.413 "block_size": 512, 00:18:32.413 "num_blocks": 65536, 00:18:32.413 "uuid": "5558f41d-69b8-47b6-b872-8ef73e262fd3", 00:18:32.413 "assigned_rate_limits": { 00:18:32.413 "rw_ios_per_sec": 0, 00:18:32.413 "rw_mbytes_per_sec": 0, 00:18:32.413 "r_mbytes_per_sec": 0, 00:18:32.413 "w_mbytes_per_sec": 0 00:18:32.413 }, 00:18:32.413 "claimed": false, 00:18:32.413 "zoned": false, 00:18:32.414 "supported_io_types": { 00:18:32.414 "read": true, 00:18:32.414 "write": true, 00:18:32.414 "unmap": false, 00:18:32.414 "flush": false, 00:18:32.414 "reset": true, 00:18:32.414 "nvme_admin": false, 00:18:32.414 "nvme_io": false, 00:18:32.414 "nvme_io_md": false, 00:18:32.414 "write_zeroes": true, 00:18:32.414 "zcopy": false, 00:18:32.414 "get_zone_info": false, 00:18:32.414 "zone_management": false, 00:18:32.414 "zone_append": false, 00:18:32.414 "compare": false, 00:18:32.414 "compare_and_write": false, 00:18:32.414 "abort": false, 00:18:32.414 "seek_hole": false, 00:18:32.414 "seek_data": false, 00:18:32.414 "copy": false, 00:18:32.414 "nvme_iov_md": false 00:18:32.414 }, 00:18:32.414 "memory_domains": [ 00:18:32.414 { 00:18:32.414 "dma_device_id": "system", 00:18:32.414 "dma_device_type": 1 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.414 "dma_device_type": 2 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "system", 00:18:32.414 "dma_device_type": 1 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.414 "dma_device_type": 2 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "system", 00:18:32.414 "dma_device_type": 1 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:32.414 "dma_device_type": 2 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "system", 00:18:32.414 "dma_device_type": 1 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.414 "dma_device_type": 2 00:18:32.414 } 00:18:32.414 ], 00:18:32.414 "driver_specific": { 00:18:32.414 "raid": { 00:18:32.414 "uuid": "5558f41d-69b8-47b6-b872-8ef73e262fd3", 00:18:32.414 "strip_size_kb": 0, 00:18:32.414 "state": "online", 00:18:32.414 "raid_level": "raid1", 00:18:32.414 "superblock": false, 00:18:32.414 "num_base_bdevs": 4, 00:18:32.414 "num_base_bdevs_discovered": 4, 00:18:32.414 "num_base_bdevs_operational": 4, 00:18:32.414 "base_bdevs_list": [ 00:18:32.414 { 00:18:32.414 "name": "BaseBdev1", 00:18:32.414 "uuid": "a5df1412-fb6e-4d8e-af33-9313d86159a9", 00:18:32.414 "is_configured": true, 00:18:32.414 "data_offset": 0, 00:18:32.414 "data_size": 65536 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "name": "BaseBdev2", 00:18:32.414 "uuid": "926dd60d-254c-4f32-a04d-3ae525d130da", 00:18:32.414 "is_configured": true, 00:18:32.414 "data_offset": 0, 00:18:32.414 "data_size": 65536 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "name": "BaseBdev3", 00:18:32.414 "uuid": "d05453ea-7638-4cd0-89f2-28f72a817487", 00:18:32.414 "is_configured": true, 00:18:32.414 "data_offset": 0, 00:18:32.414 "data_size": 65536 00:18:32.414 }, 00:18:32.414 { 00:18:32.414 "name": "BaseBdev4", 00:18:32.414 "uuid": "2431c32b-6162-44a2-9c0d-eea53dddc491", 00:18:32.414 "is_configured": true, 00:18:32.414 "data_offset": 0, 00:18:32.414 "data_size": 65536 00:18:32.414 } 00:18:32.414 ] 00:18:32.414 } 00:18:32.414 } 00:18:32.414 }' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:32.414 BaseBdev2 00:18:32.414 BaseBdev3 
00:18:32.414 BaseBdev4' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.414 22:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.414 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:32.674 22:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.674 [2024-12-09 22:59:48.314823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.674 
22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.674 "name": "Existed_Raid", 00:18:32.674 "uuid": "5558f41d-69b8-47b6-b872-8ef73e262fd3", 00:18:32.674 "strip_size_kb": 0, 00:18:32.674 "state": "online", 00:18:32.674 "raid_level": "raid1", 00:18:32.674 "superblock": false, 00:18:32.674 "num_base_bdevs": 4, 00:18:32.674 "num_base_bdevs_discovered": 3, 00:18:32.674 "num_base_bdevs_operational": 3, 00:18:32.674 "base_bdevs_list": [ 00:18:32.674 { 00:18:32.674 "name": null, 00:18:32.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.674 "is_configured": false, 00:18:32.674 "data_offset": 0, 00:18:32.674 "data_size": 65536 00:18:32.674 }, 00:18:32.674 { 00:18:32.674 "name": "BaseBdev2", 00:18:32.674 "uuid": "926dd60d-254c-4f32-a04d-3ae525d130da", 00:18:32.674 "is_configured": true, 00:18:32.674 "data_offset": 0, 00:18:32.674 "data_size": 65536 00:18:32.674 }, 00:18:32.674 { 00:18:32.674 "name": "BaseBdev3", 00:18:32.674 "uuid": "d05453ea-7638-4cd0-89f2-28f72a817487", 00:18:32.674 "is_configured": true, 00:18:32.674 "data_offset": 0, 
00:18:32.674 "data_size": 65536 00:18:32.674 }, 00:18:32.674 { 00:18:32.674 "name": "BaseBdev4", 00:18:32.674 "uuid": "2431c32b-6162-44a2-9c0d-eea53dddc491", 00:18:32.674 "is_configured": true, 00:18:32.674 "data_offset": 0, 00:18:32.674 "data_size": 65536 00:18:32.674 } 00:18:32.674 ] 00:18:32.674 }' 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.674 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.243 22:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.243 [2024-12-09 22:59:48.990711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.502 [2024-12-09 22:59:49.189588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.502 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:33.764 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.764 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.764 [2024-12-09 22:59:49.362259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:33.764 [2024-12-09 22:59:49.362409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.764 [2024-12-09 22:59:49.480171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.764 [2024-12-09 22:59:49.480245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.764 [2024-12-09 22:59:49.480264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:33.764 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.765 BaseBdev2 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.765 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.765 [ 00:18:33.765 { 00:18:33.765 "name": "BaseBdev2", 00:18:33.765 "aliases": [ 00:18:33.765 "431dcd7f-ea6d-42c8-8532-5644aa35dcfa" 00:18:33.765 ], 00:18:33.765 "product_name": "Malloc disk", 00:18:33.765 "block_size": 512, 00:18:33.765 "num_blocks": 65536, 00:18:33.765 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:33.765 "assigned_rate_limits": { 00:18:33.765 "rw_ios_per_sec": 0, 00:18:33.765 "rw_mbytes_per_sec": 0, 00:18:33.765 "r_mbytes_per_sec": 0, 00:18:33.765 "w_mbytes_per_sec": 0 00:18:33.765 }, 00:18:33.765 "claimed": false, 00:18:33.765 "zoned": false, 00:18:33.765 "supported_io_types": { 00:18:33.765 "read": true, 00:18:33.765 "write": true, 00:18:33.765 "unmap": true, 00:18:33.765 "flush": true, 00:18:33.765 "reset": true, 00:18:33.765 "nvme_admin": false, 00:18:33.765 "nvme_io": false, 00:18:33.765 "nvme_io_md": false, 00:18:33.765 "write_zeroes": true, 00:18:33.765 "zcopy": true, 00:18:33.765 "get_zone_info": false, 00:18:33.765 "zone_management": false, 00:18:33.765 "zone_append": false, 
00:18:33.765 "compare": false, 00:18:33.765 "compare_and_write": false, 00:18:33.765 "abort": true, 00:18:33.765 "seek_hole": false, 00:18:33.765 "seek_data": false, 00:18:33.765 "copy": true, 00:18:33.765 "nvme_iov_md": false 00:18:33.765 }, 00:18:33.765 "memory_domains": [ 00:18:33.765 { 00:18:33.765 "dma_device_id": "system", 00:18:33.765 "dma_device_type": 1 00:18:33.765 }, 00:18:33.765 { 00:18:33.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.765 "dma_device_type": 2 00:18:33.765 } 00:18:33.765 ], 00:18:33.765 "driver_specific": {} 00:18:33.765 } 00:18:33.765 ] 00:18:34.024 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.024 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.025 BaseBdev3 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.025 [ 00:18:34.025 { 00:18:34.025 "name": "BaseBdev3", 00:18:34.025 "aliases": [ 00:18:34.025 "6fdcf598-ebc5-4937-a929-d91f805322ba" 00:18:34.025 ], 00:18:34.025 "product_name": "Malloc disk", 00:18:34.025 "block_size": 512, 00:18:34.025 "num_blocks": 65536, 00:18:34.025 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:34.025 "assigned_rate_limits": { 00:18:34.025 "rw_ios_per_sec": 0, 00:18:34.025 "rw_mbytes_per_sec": 0, 00:18:34.025 "r_mbytes_per_sec": 0, 00:18:34.025 "w_mbytes_per_sec": 0 00:18:34.025 }, 00:18:34.025 "claimed": false, 00:18:34.025 "zoned": false, 00:18:34.025 "supported_io_types": { 00:18:34.025 "read": true, 00:18:34.025 "write": true, 00:18:34.025 "unmap": true, 00:18:34.025 "flush": true, 00:18:34.025 "reset": true, 00:18:34.025 "nvme_admin": false, 00:18:34.025 "nvme_io": false, 00:18:34.025 "nvme_io_md": false, 00:18:34.025 "write_zeroes": true, 00:18:34.025 "zcopy": true, 00:18:34.025 "get_zone_info": false, 00:18:34.025 "zone_management": false, 00:18:34.025 "zone_append": false, 
00:18:34.025 "compare": false, 00:18:34.025 "compare_and_write": false, 00:18:34.025 "abort": true, 00:18:34.025 "seek_hole": false, 00:18:34.025 "seek_data": false, 00:18:34.025 "copy": true, 00:18:34.025 "nvme_iov_md": false 00:18:34.025 }, 00:18:34.025 "memory_domains": [ 00:18:34.025 { 00:18:34.025 "dma_device_id": "system", 00:18:34.025 "dma_device_type": 1 00:18:34.025 }, 00:18:34.025 { 00:18:34.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.025 "dma_device_type": 2 00:18:34.025 } 00:18:34.025 ], 00:18:34.025 "driver_specific": {} 00:18:34.025 } 00:18:34.025 ] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.025 BaseBdev4 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.025 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.025 [ 00:18:34.025 { 00:18:34.025 "name": "BaseBdev4", 00:18:34.025 "aliases": [ 00:18:34.025 "84869b4a-a839-4eba-b67a-021a8df4b025" 00:18:34.025 ], 00:18:34.025 "product_name": "Malloc disk", 00:18:34.025 "block_size": 512, 00:18:34.025 "num_blocks": 65536, 00:18:34.025 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:34.025 "assigned_rate_limits": { 00:18:34.025 "rw_ios_per_sec": 0, 00:18:34.025 "rw_mbytes_per_sec": 0, 00:18:34.025 "r_mbytes_per_sec": 0, 00:18:34.025 "w_mbytes_per_sec": 0 00:18:34.025 }, 00:18:34.025 "claimed": false, 00:18:34.025 "zoned": false, 00:18:34.025 "supported_io_types": { 00:18:34.025 "read": true, 00:18:34.025 "write": true, 00:18:34.025 "unmap": true, 00:18:34.025 "flush": true, 00:18:34.025 "reset": true, 00:18:34.025 "nvme_admin": false, 00:18:34.025 "nvme_io": false, 00:18:34.025 "nvme_io_md": false, 00:18:34.025 "write_zeroes": true, 00:18:34.025 "zcopy": true, 00:18:34.025 "get_zone_info": false, 00:18:34.025 "zone_management": false, 00:18:34.025 "zone_append": false, 
00:18:34.025 "compare": false, 00:18:34.025 "compare_and_write": false, 00:18:34.025 "abort": true, 00:18:34.026 "seek_hole": false, 00:18:34.026 "seek_data": false, 00:18:34.026 "copy": true, 00:18:34.026 "nvme_iov_md": false 00:18:34.026 }, 00:18:34.026 "memory_domains": [ 00:18:34.026 { 00:18:34.026 "dma_device_id": "system", 00:18:34.026 "dma_device_type": 1 00:18:34.026 }, 00:18:34.026 { 00:18:34.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.026 "dma_device_type": 2 00:18:34.026 } 00:18:34.026 ], 00:18:34.026 "driver_specific": {} 00:18:34.026 } 00:18:34.026 ] 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.026 [2024-12-09 22:59:49.803441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.026 [2024-12-09 22:59:49.803553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.026 [2024-12-09 22:59:49.803587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.026 [2024-12-09 22:59:49.805915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:34.026 [2024-12-09 22:59:49.805988] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:18:34.026 "name": "Existed_Raid", 00:18:34.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.026 "strip_size_kb": 0, 00:18:34.026 "state": "configuring", 00:18:34.026 "raid_level": "raid1", 00:18:34.026 "superblock": false, 00:18:34.026 "num_base_bdevs": 4, 00:18:34.026 "num_base_bdevs_discovered": 3, 00:18:34.026 "num_base_bdevs_operational": 4, 00:18:34.026 "base_bdevs_list": [ 00:18:34.026 { 00:18:34.026 "name": "BaseBdev1", 00:18:34.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.026 "is_configured": false, 00:18:34.026 "data_offset": 0, 00:18:34.026 "data_size": 0 00:18:34.026 }, 00:18:34.026 { 00:18:34.026 "name": "BaseBdev2", 00:18:34.026 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:34.026 "is_configured": true, 00:18:34.026 "data_offset": 0, 00:18:34.026 "data_size": 65536 00:18:34.026 }, 00:18:34.026 { 00:18:34.026 "name": "BaseBdev3", 00:18:34.026 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:34.026 "is_configured": true, 00:18:34.026 "data_offset": 0, 00:18:34.026 "data_size": 65536 00:18:34.026 }, 00:18:34.026 { 00:18:34.026 "name": "BaseBdev4", 00:18:34.026 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:34.026 "is_configured": true, 00:18:34.026 "data_offset": 0, 00:18:34.026 "data_size": 65536 00:18:34.026 } 00:18:34.026 ] 00:18:34.026 }' 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.026 22:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.593 [2024-12-09 22:59:50.270667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.593 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.593 "name": "Existed_Raid", 00:18:34.593 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:34.593 "strip_size_kb": 0, 00:18:34.593 "state": "configuring", 00:18:34.593 "raid_level": "raid1", 00:18:34.593 "superblock": false, 00:18:34.593 "num_base_bdevs": 4, 00:18:34.594 "num_base_bdevs_discovered": 2, 00:18:34.594 "num_base_bdevs_operational": 4, 00:18:34.594 "base_bdevs_list": [ 00:18:34.594 { 00:18:34.594 "name": "BaseBdev1", 00:18:34.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.594 "is_configured": false, 00:18:34.594 "data_offset": 0, 00:18:34.594 "data_size": 0 00:18:34.594 }, 00:18:34.594 { 00:18:34.594 "name": null, 00:18:34.594 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:34.594 "is_configured": false, 00:18:34.594 "data_offset": 0, 00:18:34.594 "data_size": 65536 00:18:34.594 }, 00:18:34.594 { 00:18:34.594 "name": "BaseBdev3", 00:18:34.594 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:34.594 "is_configured": true, 00:18:34.594 "data_offset": 0, 00:18:34.594 "data_size": 65536 00:18:34.594 }, 00:18:34.594 { 00:18:34.594 "name": "BaseBdev4", 00:18:34.594 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:34.594 "is_configured": true, 00:18:34.594 "data_offset": 0, 00:18:34.594 "data_size": 65536 00:18:34.594 } 00:18:34.594 ] 00:18:34.594 }' 00:18:34.594 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.594 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.163 [2024-12-09 22:59:50.806317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.163 BaseBdev1 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.163 [ 00:18:35.163 { 00:18:35.163 "name": "BaseBdev1", 00:18:35.163 "aliases": [ 00:18:35.163 "741753e4-b78b-4690-8192-0e7b85a3f62f" 00:18:35.163 ], 00:18:35.163 "product_name": "Malloc disk", 00:18:35.163 "block_size": 512, 00:18:35.163 "num_blocks": 65536, 00:18:35.163 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:35.163 "assigned_rate_limits": { 00:18:35.163 "rw_ios_per_sec": 0, 00:18:35.163 "rw_mbytes_per_sec": 0, 00:18:35.163 "r_mbytes_per_sec": 0, 00:18:35.163 "w_mbytes_per_sec": 0 00:18:35.163 }, 00:18:35.163 "claimed": true, 00:18:35.163 "claim_type": "exclusive_write", 00:18:35.163 "zoned": false, 00:18:35.163 "supported_io_types": { 00:18:35.163 "read": true, 00:18:35.163 "write": true, 00:18:35.163 "unmap": true, 00:18:35.163 "flush": true, 00:18:35.163 "reset": true, 00:18:35.163 "nvme_admin": false, 00:18:35.163 "nvme_io": false, 00:18:35.163 "nvme_io_md": false, 00:18:35.163 "write_zeroes": true, 00:18:35.163 "zcopy": true, 00:18:35.163 "get_zone_info": false, 00:18:35.163 "zone_management": false, 00:18:35.163 "zone_append": false, 00:18:35.163 "compare": false, 00:18:35.163 "compare_and_write": false, 00:18:35.163 "abort": true, 00:18:35.163 "seek_hole": false, 00:18:35.163 "seek_data": false, 00:18:35.163 "copy": true, 00:18:35.163 "nvme_iov_md": false 00:18:35.163 }, 00:18:35.163 "memory_domains": [ 00:18:35.163 { 00:18:35.163 "dma_device_id": "system", 00:18:35.163 "dma_device_type": 1 00:18:35.163 }, 00:18:35.163 { 00:18:35.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.163 "dma_device_type": 2 00:18:35.163 } 00:18:35.163 ], 00:18:35.163 "driver_specific": {} 00:18:35.163 } 00:18:35.163 ] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.163 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.163 "name": "Existed_Raid", 00:18:35.163 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:35.163 "strip_size_kb": 0, 00:18:35.163 "state": "configuring", 00:18:35.163 "raid_level": "raid1", 00:18:35.163 "superblock": false, 00:18:35.163 "num_base_bdevs": 4, 00:18:35.163 "num_base_bdevs_discovered": 3, 00:18:35.163 "num_base_bdevs_operational": 4, 00:18:35.163 "base_bdevs_list": [ 00:18:35.163 { 00:18:35.163 "name": "BaseBdev1", 00:18:35.164 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:35.164 "is_configured": true, 00:18:35.164 "data_offset": 0, 00:18:35.164 "data_size": 65536 00:18:35.164 }, 00:18:35.164 { 00:18:35.164 "name": null, 00:18:35.164 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:35.164 "is_configured": false, 00:18:35.164 "data_offset": 0, 00:18:35.164 "data_size": 65536 00:18:35.164 }, 00:18:35.164 { 00:18:35.164 "name": "BaseBdev3", 00:18:35.164 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:35.164 "is_configured": true, 00:18:35.164 "data_offset": 0, 00:18:35.164 "data_size": 65536 00:18:35.164 }, 00:18:35.164 { 00:18:35.164 "name": "BaseBdev4", 00:18:35.164 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:35.164 "is_configured": true, 00:18:35.164 "data_offset": 0, 00:18:35.164 "data_size": 65536 00:18:35.164 } 00:18:35.164 ] 00:18:35.164 }' 00:18:35.164 22:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.164 22:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.423 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:35.423 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.423 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.423 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.681 [2024-12-09 22:59:51.313707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.681 "name": "Existed_Raid", 00:18:35.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.681 "strip_size_kb": 0, 00:18:35.681 "state": "configuring", 00:18:35.681 "raid_level": "raid1", 00:18:35.681 "superblock": false, 00:18:35.681 "num_base_bdevs": 4, 00:18:35.681 "num_base_bdevs_discovered": 2, 00:18:35.681 "num_base_bdevs_operational": 4, 00:18:35.681 "base_bdevs_list": [ 00:18:35.681 { 00:18:35.681 "name": "BaseBdev1", 00:18:35.681 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:35.681 "is_configured": true, 00:18:35.681 "data_offset": 0, 00:18:35.681 "data_size": 65536 00:18:35.681 }, 00:18:35.681 { 00:18:35.681 "name": null, 00:18:35.681 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:35.681 "is_configured": false, 00:18:35.681 "data_offset": 0, 00:18:35.681 "data_size": 65536 00:18:35.681 }, 00:18:35.681 { 00:18:35.681 "name": null, 00:18:35.681 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:35.681 "is_configured": false, 00:18:35.681 "data_offset": 0, 00:18:35.681 "data_size": 65536 00:18:35.681 }, 00:18:35.681 { 00:18:35.681 "name": "BaseBdev4", 00:18:35.681 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:35.681 "is_configured": true, 00:18:35.681 "data_offset": 0, 00:18:35.681 "data_size": 65536 00:18:35.681 } 00:18:35.681 ] 00:18:35.681 }' 00:18:35.681 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.681 22:59:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.939 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.939 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.939 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.939 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:35.939 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.198 [2024-12-09 22:59:51.816792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.198 22:59:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.198 "name": "Existed_Raid", 00:18:36.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.198 "strip_size_kb": 0, 00:18:36.198 "state": "configuring", 00:18:36.198 "raid_level": "raid1", 00:18:36.198 "superblock": false, 00:18:36.198 "num_base_bdevs": 4, 00:18:36.198 "num_base_bdevs_discovered": 3, 00:18:36.198 "num_base_bdevs_operational": 4, 00:18:36.198 "base_bdevs_list": [ 00:18:36.198 { 00:18:36.198 "name": "BaseBdev1", 00:18:36.198 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:36.198 "is_configured": true, 00:18:36.198 "data_offset": 0, 00:18:36.198 "data_size": 65536 00:18:36.198 }, 00:18:36.198 { 00:18:36.198 "name": null, 00:18:36.198 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:36.198 "is_configured": false, 00:18:36.198 "data_offset": 
0, 00:18:36.198 "data_size": 65536 00:18:36.198 }, 00:18:36.198 { 00:18:36.198 "name": "BaseBdev3", 00:18:36.198 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:36.198 "is_configured": true, 00:18:36.198 "data_offset": 0, 00:18:36.198 "data_size": 65536 00:18:36.198 }, 00:18:36.198 { 00:18:36.198 "name": "BaseBdev4", 00:18:36.198 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:36.198 "is_configured": true, 00:18:36.198 "data_offset": 0, 00:18:36.198 "data_size": 65536 00:18:36.198 } 00:18:36.198 ] 00:18:36.198 }' 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.198 22:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.457 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.457 [2024-12-09 22:59:52.308675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.717 22:59:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.717 "name": "Existed_Raid", 00:18:36.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.717 "strip_size_kb": 0, 00:18:36.717 "state": "configuring", 00:18:36.717 
"raid_level": "raid1", 00:18:36.717 "superblock": false, 00:18:36.717 "num_base_bdevs": 4, 00:18:36.717 "num_base_bdevs_discovered": 2, 00:18:36.717 "num_base_bdevs_operational": 4, 00:18:36.717 "base_bdevs_list": [ 00:18:36.717 { 00:18:36.717 "name": null, 00:18:36.717 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:36.717 "is_configured": false, 00:18:36.717 "data_offset": 0, 00:18:36.717 "data_size": 65536 00:18:36.717 }, 00:18:36.717 { 00:18:36.717 "name": null, 00:18:36.717 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:36.717 "is_configured": false, 00:18:36.717 "data_offset": 0, 00:18:36.717 "data_size": 65536 00:18:36.717 }, 00:18:36.717 { 00:18:36.717 "name": "BaseBdev3", 00:18:36.717 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:36.717 "is_configured": true, 00:18:36.717 "data_offset": 0, 00:18:36.717 "data_size": 65536 00:18:36.717 }, 00:18:36.717 { 00:18:36.717 "name": "BaseBdev4", 00:18:36.717 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:36.717 "is_configured": true, 00:18:36.717 "data_offset": 0, 00:18:36.717 "data_size": 65536 00:18:36.717 } 00:18:36.717 ] 00:18:36.717 }' 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.717 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.284 [2024-12-09 22:59:52.900681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.284 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.285 "name": "Existed_Raid", 00:18:37.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.285 "strip_size_kb": 0, 00:18:37.285 "state": "configuring", 00:18:37.285 "raid_level": "raid1", 00:18:37.285 "superblock": false, 00:18:37.285 "num_base_bdevs": 4, 00:18:37.285 "num_base_bdevs_discovered": 3, 00:18:37.285 "num_base_bdevs_operational": 4, 00:18:37.285 "base_bdevs_list": [ 00:18:37.285 { 00:18:37.285 "name": null, 00:18:37.285 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:37.285 "is_configured": false, 00:18:37.285 "data_offset": 0, 00:18:37.285 "data_size": 65536 00:18:37.285 }, 00:18:37.285 { 00:18:37.285 "name": "BaseBdev2", 00:18:37.285 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:37.285 "is_configured": true, 00:18:37.285 "data_offset": 0, 00:18:37.285 "data_size": 65536 00:18:37.285 }, 00:18:37.285 { 00:18:37.285 "name": "BaseBdev3", 00:18:37.285 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:37.285 "is_configured": true, 00:18:37.285 "data_offset": 0, 00:18:37.285 "data_size": 65536 00:18:37.285 }, 00:18:37.285 { 00:18:37.285 "name": "BaseBdev4", 00:18:37.285 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:37.285 "is_configured": true, 00:18:37.285 "data_offset": 0, 00:18:37.285 "data_size": 65536 00:18:37.285 } 00:18:37.285 ] 00:18:37.285 }' 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.285 22:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.543 22:59:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:37.543 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.543 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.543 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 741753e4-b78b-4690-8192-0e7b85a3f62f 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 [2024-12-09 22:59:53.509863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:37.802 [2024-12-09 22:59:53.510039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:37.802 [2024-12-09 22:59:53.510084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:37.802 
[2024-12-09 22:59:53.510377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:37.802 [2024-12-09 22:59:53.510605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:37.802 [2024-12-09 22:59:53.510653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:37.802 [2024-12-09 22:59:53.510948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.802 NewBaseBdev 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 [ 00:18:37.802 { 00:18:37.802 "name": "NewBaseBdev", 00:18:37.802 "aliases": [ 00:18:37.802 "741753e4-b78b-4690-8192-0e7b85a3f62f" 00:18:37.802 ], 00:18:37.802 "product_name": "Malloc disk", 00:18:37.802 "block_size": 512, 00:18:37.802 "num_blocks": 65536, 00:18:37.802 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:37.802 "assigned_rate_limits": { 00:18:37.802 "rw_ios_per_sec": 0, 00:18:37.802 "rw_mbytes_per_sec": 0, 00:18:37.802 "r_mbytes_per_sec": 0, 00:18:37.802 "w_mbytes_per_sec": 0 00:18:37.802 }, 00:18:37.802 "claimed": true, 00:18:37.802 "claim_type": "exclusive_write", 00:18:37.802 "zoned": false, 00:18:37.802 "supported_io_types": { 00:18:37.802 "read": true, 00:18:37.802 "write": true, 00:18:37.802 "unmap": true, 00:18:37.802 "flush": true, 00:18:37.802 "reset": true, 00:18:37.802 "nvme_admin": false, 00:18:37.802 "nvme_io": false, 00:18:37.802 "nvme_io_md": false, 00:18:37.802 "write_zeroes": true, 00:18:37.802 "zcopy": true, 00:18:37.802 "get_zone_info": false, 00:18:37.802 "zone_management": false, 00:18:37.802 "zone_append": false, 00:18:37.802 "compare": false, 00:18:37.802 "compare_and_write": false, 00:18:37.802 "abort": true, 00:18:37.802 "seek_hole": false, 00:18:37.802 "seek_data": false, 00:18:37.802 "copy": true, 00:18:37.802 "nvme_iov_md": false 00:18:37.802 }, 00:18:37.802 "memory_domains": [ 00:18:37.802 { 00:18:37.802 "dma_device_id": "system", 00:18:37.802 "dma_device_type": 1 00:18:37.802 }, 00:18:37.802 { 00:18:37.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.802 "dma_device_type": 2 00:18:37.802 } 00:18:37.802 ], 00:18:37.802 "driver_specific": {} 00:18:37.802 } 00:18:37.802 ] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.802 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.802 "name": "Existed_Raid", 00:18:37.802 "uuid": "bc65c2f2-87b9-4500-add9-c49fed3ff2e9", 00:18:37.802 "strip_size_kb": 0, 00:18:37.802 "state": "online", 00:18:37.802 
"raid_level": "raid1", 00:18:37.802 "superblock": false, 00:18:37.802 "num_base_bdevs": 4, 00:18:37.802 "num_base_bdevs_discovered": 4, 00:18:37.802 "num_base_bdevs_operational": 4, 00:18:37.802 "base_bdevs_list": [ 00:18:37.802 { 00:18:37.802 "name": "NewBaseBdev", 00:18:37.802 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:37.802 "is_configured": true, 00:18:37.802 "data_offset": 0, 00:18:37.803 "data_size": 65536 00:18:37.803 }, 00:18:37.803 { 00:18:37.803 "name": "BaseBdev2", 00:18:37.803 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:37.803 "is_configured": true, 00:18:37.803 "data_offset": 0, 00:18:37.803 "data_size": 65536 00:18:37.803 }, 00:18:37.803 { 00:18:37.803 "name": "BaseBdev3", 00:18:37.803 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:37.803 "is_configured": true, 00:18:37.803 "data_offset": 0, 00:18:37.803 "data_size": 65536 00:18:37.803 }, 00:18:37.803 { 00:18:37.803 "name": "BaseBdev4", 00:18:37.803 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:37.803 "is_configured": true, 00:18:37.803 "data_offset": 0, 00:18:37.803 "data_size": 65536 00:18:37.803 } 00:18:37.803 ] 00:18:37.803 }' 00:18:37.803 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.803 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.371 22:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.371 [2024-12-09 22:59:54.001474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.371 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.371 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.371 "name": "Existed_Raid", 00:18:38.371 "aliases": [ 00:18:38.371 "bc65c2f2-87b9-4500-add9-c49fed3ff2e9" 00:18:38.371 ], 00:18:38.371 "product_name": "Raid Volume", 00:18:38.371 "block_size": 512, 00:18:38.371 "num_blocks": 65536, 00:18:38.371 "uuid": "bc65c2f2-87b9-4500-add9-c49fed3ff2e9", 00:18:38.371 "assigned_rate_limits": { 00:18:38.371 "rw_ios_per_sec": 0, 00:18:38.371 "rw_mbytes_per_sec": 0, 00:18:38.371 "r_mbytes_per_sec": 0, 00:18:38.371 "w_mbytes_per_sec": 0 00:18:38.371 }, 00:18:38.371 "claimed": false, 00:18:38.371 "zoned": false, 00:18:38.371 "supported_io_types": { 00:18:38.371 "read": true, 00:18:38.371 "write": true, 00:18:38.371 "unmap": false, 00:18:38.371 "flush": false, 00:18:38.371 "reset": true, 00:18:38.371 "nvme_admin": false, 00:18:38.371 "nvme_io": false, 00:18:38.371 "nvme_io_md": false, 00:18:38.371 "write_zeroes": true, 00:18:38.371 "zcopy": false, 00:18:38.371 "get_zone_info": false, 00:18:38.371 "zone_management": false, 00:18:38.371 "zone_append": false, 00:18:38.371 "compare": false, 00:18:38.371 "compare_and_write": false, 00:18:38.371 "abort": false, 00:18:38.371 "seek_hole": false, 00:18:38.371 "seek_data": false, 00:18:38.371 
"copy": false, 00:18:38.371 "nvme_iov_md": false 00:18:38.371 }, 00:18:38.371 "memory_domains": [ 00:18:38.371 { 00:18:38.371 "dma_device_id": "system", 00:18:38.371 "dma_device_type": 1 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.371 "dma_device_type": 2 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "system", 00:18:38.371 "dma_device_type": 1 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.371 "dma_device_type": 2 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "system", 00:18:38.371 "dma_device_type": 1 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.371 "dma_device_type": 2 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "system", 00:18:38.371 "dma_device_type": 1 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.371 "dma_device_type": 2 00:18:38.371 } 00:18:38.371 ], 00:18:38.371 "driver_specific": { 00:18:38.371 "raid": { 00:18:38.371 "uuid": "bc65c2f2-87b9-4500-add9-c49fed3ff2e9", 00:18:38.371 "strip_size_kb": 0, 00:18:38.371 "state": "online", 00:18:38.371 "raid_level": "raid1", 00:18:38.371 "superblock": false, 00:18:38.371 "num_base_bdevs": 4, 00:18:38.371 "num_base_bdevs_discovered": 4, 00:18:38.371 "num_base_bdevs_operational": 4, 00:18:38.371 "base_bdevs_list": [ 00:18:38.371 { 00:18:38.371 "name": "NewBaseBdev", 00:18:38.371 "uuid": "741753e4-b78b-4690-8192-0e7b85a3f62f", 00:18:38.371 "is_configured": true, 00:18:38.371 "data_offset": 0, 00:18:38.371 "data_size": 65536 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "name": "BaseBdev2", 00:18:38.371 "uuid": "431dcd7f-ea6d-42c8-8532-5644aa35dcfa", 00:18:38.371 "is_configured": true, 00:18:38.371 "data_offset": 0, 00:18:38.371 "data_size": 65536 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "name": "BaseBdev3", 00:18:38.371 "uuid": "6fdcf598-ebc5-4937-a929-d91f805322ba", 00:18:38.371 
"is_configured": true, 00:18:38.371 "data_offset": 0, 00:18:38.371 "data_size": 65536 00:18:38.371 }, 00:18:38.371 { 00:18:38.371 "name": "BaseBdev4", 00:18:38.371 "uuid": "84869b4a-a839-4eba-b67a-021a8df4b025", 00:18:38.371 "is_configured": true, 00:18:38.371 "data_offset": 0, 00:18:38.371 "data_size": 65536 00:18:38.371 } 00:18:38.371 ] 00:18:38.371 } 00:18:38.371 } 00:18:38.371 }' 00:18:38.371 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.371 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:38.371 BaseBdev2 00:18:38.372 BaseBdev3 00:18:38.372 BaseBdev4' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.372 22:59:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.372 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.632 22:59:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.632 [2024-12-09 22:59:54.296636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.632 [2024-12-09 22:59:54.296762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.632 [2024-12-09 22:59:54.296904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.632 [2024-12-09 22:59:54.297220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.632 [2024-12-09 22:59:54.297237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73783 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73783 ']' 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73783 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73783 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.632 killing process with pid 73783 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73783' 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73783 00:18:38.632 [2024-12-09 22:59:54.342702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.632 22:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73783 00:18:39.200 [2024-12-09 22:59:54.756919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.582 22:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:40.582 00:18:40.582 real 0m12.325s 00:18:40.582 user 0m19.217s 00:18:40.582 sys 0m2.477s 00:18:40.582 ************************************ 00:18:40.582 END TEST raid_state_function_test 00:18:40.582 ************************************ 00:18:40.582 22:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.582 22:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:40.583 22:59:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:40.583 22:59:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:40.583 22:59:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.583 22:59:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.583 ************************************ 00:18:40.583 START TEST raid_state_function_test_sb 00:18:40.583 ************************************ 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.583 
22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74460 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74460' 00:18:40.583 Process raid pid: 74460 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74460 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74460 ']' 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.583 22:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.583 [2024-12-09 22:59:56.215963] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:18:40.583 [2024-12-09 22:59:56.216315] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.583 [2024-12-09 22:59:56.399945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.842 [2024-12-09 22:59:56.526119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.101 [2024-12-09 22:59:56.747498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.101 [2024-12-09 22:59:56.747655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.362 [2024-12-09 22:59:57.075173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.362 [2024-12-09 22:59:57.075250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.362 [2024-12-09 22:59:57.075264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.362 [2024-12-09 22:59:57.075278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.362 [2024-12-09 22:59:57.075287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:41.362 [2024-12-09 22:59:57.075300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.362 [2024-12-09 22:59:57.075309] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.362 [2024-12-09 22:59:57.075322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.362 22:59:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.362 "name": "Existed_Raid", 00:18:41.362 "uuid": "0ea82f6b-3903-4066-b5de-6c17961fa318", 00:18:41.362 "strip_size_kb": 0, 00:18:41.362 "state": "configuring", 00:18:41.362 "raid_level": "raid1", 00:18:41.362 "superblock": true, 00:18:41.362 "num_base_bdevs": 4, 00:18:41.362 "num_base_bdevs_discovered": 0, 00:18:41.362 "num_base_bdevs_operational": 4, 00:18:41.362 "base_bdevs_list": [ 00:18:41.362 { 00:18:41.362 "name": "BaseBdev1", 00:18:41.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.362 "is_configured": false, 00:18:41.362 "data_offset": 0, 00:18:41.362 "data_size": 0 00:18:41.362 }, 00:18:41.362 { 00:18:41.362 "name": "BaseBdev2", 00:18:41.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.362 "is_configured": false, 00:18:41.362 "data_offset": 0, 00:18:41.362 "data_size": 0 00:18:41.362 }, 00:18:41.362 { 00:18:41.362 "name": "BaseBdev3", 00:18:41.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.362 "is_configured": false, 00:18:41.362 "data_offset": 0, 00:18:41.362 "data_size": 0 00:18:41.362 }, 00:18:41.362 { 00:18:41.362 "name": "BaseBdev4", 00:18:41.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.362 "is_configured": false, 00:18:41.362 "data_offset": 0, 00:18:41.362 "data_size": 0 00:18:41.362 } 00:18:41.362 ] 00:18:41.362 }' 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.362 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 22:59:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 [2024-12-09 22:59:57.558261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.932 [2024-12-09 22:59:57.558313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 [2024-12-09 22:59:57.566273] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.932 [2024-12-09 22:59:57.566332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.932 [2024-12-09 22:59:57.566345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.932 [2024-12-09 22:59:57.566360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.932 [2024-12-09 22:59:57.566370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.932 [2024-12-09 22:59:57.566383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.932 [2024-12-09 22:59:57.566393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:18:41.932 [2024-12-09 22:59:57.566406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 [2024-12-09 22:59:57.612376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.932 BaseBdev1 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 [ 00:18:41.932 { 00:18:41.932 "name": "BaseBdev1", 00:18:41.932 "aliases": [ 00:18:41.932 "d5e052db-ef06-4ce1-98d8-d69fe3178198" 00:18:41.932 ], 00:18:41.932 "product_name": "Malloc disk", 00:18:41.932 "block_size": 512, 00:18:41.932 "num_blocks": 65536, 00:18:41.932 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:41.932 "assigned_rate_limits": { 00:18:41.932 "rw_ios_per_sec": 0, 00:18:41.932 "rw_mbytes_per_sec": 0, 00:18:41.932 "r_mbytes_per_sec": 0, 00:18:41.932 "w_mbytes_per_sec": 0 00:18:41.932 }, 00:18:41.932 "claimed": true, 00:18:41.932 "claim_type": "exclusive_write", 00:18:41.932 "zoned": false, 00:18:41.932 "supported_io_types": { 00:18:41.932 "read": true, 00:18:41.932 "write": true, 00:18:41.932 "unmap": true, 00:18:41.932 "flush": true, 00:18:41.932 "reset": true, 00:18:41.932 "nvme_admin": false, 00:18:41.932 "nvme_io": false, 00:18:41.932 "nvme_io_md": false, 00:18:41.932 "write_zeroes": true, 00:18:41.932 "zcopy": true, 00:18:41.932 "get_zone_info": false, 00:18:41.932 "zone_management": false, 00:18:41.932 "zone_append": false, 00:18:41.932 "compare": false, 00:18:41.932 "compare_and_write": false, 00:18:41.932 "abort": true, 00:18:41.932 "seek_hole": false, 00:18:41.932 "seek_data": false, 00:18:41.932 "copy": true, 00:18:41.932 "nvme_iov_md": false 00:18:41.932 }, 00:18:41.932 "memory_domains": [ 00:18:41.932 { 00:18:41.932 "dma_device_id": "system", 00:18:41.932 "dma_device_type": 1 00:18:41.932 }, 00:18:41.932 { 00:18:41.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.932 "dma_device_type": 2 00:18:41.932 } 00:18:41.932 
], 00:18:41.932 "driver_specific": {} 00:18:41.932 } 00:18:41.932 ] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.932 22:59:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.932 "name": "Existed_Raid", 00:18:41.932 "uuid": "c5d01822-8947-40f3-a8d5-34a32b2bb370", 00:18:41.932 "strip_size_kb": 0, 00:18:41.932 "state": "configuring", 00:18:41.932 "raid_level": "raid1", 00:18:41.932 "superblock": true, 00:18:41.932 "num_base_bdevs": 4, 00:18:41.932 "num_base_bdevs_discovered": 1, 00:18:41.932 "num_base_bdevs_operational": 4, 00:18:41.932 "base_bdevs_list": [ 00:18:41.932 { 00:18:41.932 "name": "BaseBdev1", 00:18:41.932 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:41.932 "is_configured": true, 00:18:41.932 "data_offset": 2048, 00:18:41.932 "data_size": 63488 00:18:41.932 }, 00:18:41.932 { 00:18:41.932 "name": "BaseBdev2", 00:18:41.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.932 "is_configured": false, 00:18:41.932 "data_offset": 0, 00:18:41.932 "data_size": 0 00:18:41.932 }, 00:18:41.932 { 00:18:41.932 "name": "BaseBdev3", 00:18:41.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.932 "is_configured": false, 00:18:41.932 "data_offset": 0, 00:18:41.932 "data_size": 0 00:18:41.932 }, 00:18:41.932 { 00:18:41.932 "name": "BaseBdev4", 00:18:41.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.932 "is_configured": false, 00:18:41.932 "data_offset": 0, 00:18:41.932 "data_size": 0 00:18:41.932 } 00:18:41.932 ] 00:18:41.932 }' 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.932 22:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.502 22:59:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 [2024-12-09 22:59:58.091649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.502 [2024-12-09 22:59:58.091797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 [2024-12-09 22:59:58.099742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.502 [2024-12-09 22:59:58.101923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.502 [2024-12-09 22:59:58.101977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.502 [2024-12-09 22:59:58.101990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.502 [2024-12-09 22:59:58.102003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.502 [2024-12-09 22:59:58.102012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:42.502 [2024-12-09 22:59:58.102023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:42.502 "name": "Existed_Raid", 00:18:42.502 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:42.502 "strip_size_kb": 0, 00:18:42.502 "state": "configuring", 00:18:42.502 "raid_level": "raid1", 00:18:42.502 "superblock": true, 00:18:42.502 "num_base_bdevs": 4, 00:18:42.502 "num_base_bdevs_discovered": 1, 00:18:42.502 "num_base_bdevs_operational": 4, 00:18:42.502 "base_bdevs_list": [ 00:18:42.502 { 00:18:42.502 "name": "BaseBdev1", 00:18:42.502 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:42.502 "is_configured": true, 00:18:42.502 "data_offset": 2048, 00:18:42.502 "data_size": 63488 00:18:42.502 }, 00:18:42.502 { 00:18:42.502 "name": "BaseBdev2", 00:18:42.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.502 "is_configured": false, 00:18:42.502 "data_offset": 0, 00:18:42.502 "data_size": 0 00:18:42.502 }, 00:18:42.502 { 00:18:42.502 "name": "BaseBdev3", 00:18:42.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.502 "is_configured": false, 00:18:42.502 "data_offset": 0, 00:18:42.502 "data_size": 0 00:18:42.502 }, 00:18:42.502 { 00:18:42.502 "name": "BaseBdev4", 00:18:42.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.502 "is_configured": false, 00:18:42.502 "data_offset": 0, 00:18:42.502 "data_size": 0 00:18:42.502 } 00:18:42.502 ] 00:18:42.502 }' 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.502 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.762 [2024-12-09 22:59:58.584172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:18:42.762 BaseBdev2 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.762 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.763 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.763 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.763 [ 00:18:42.763 { 00:18:42.763 "name": "BaseBdev2", 00:18:42.763 "aliases": [ 00:18:42.763 "0191ace3-1e5d-4469-a410-a101fd1e5bd6" 00:18:42.763 ], 00:18:42.763 "product_name": "Malloc disk", 00:18:42.763 "block_size": 512, 00:18:42.763 "num_blocks": 65536, 00:18:42.763 "uuid": "0191ace3-1e5d-4469-a410-a101fd1e5bd6", 00:18:42.763 
"assigned_rate_limits": { 00:18:42.763 "rw_ios_per_sec": 0, 00:18:42.763 "rw_mbytes_per_sec": 0, 00:18:42.763 "r_mbytes_per_sec": 0, 00:18:42.763 "w_mbytes_per_sec": 0 00:18:42.763 }, 00:18:42.763 "claimed": true, 00:18:42.763 "claim_type": "exclusive_write", 00:18:42.763 "zoned": false, 00:18:42.763 "supported_io_types": { 00:18:42.763 "read": true, 00:18:42.763 "write": true, 00:18:42.763 "unmap": true, 00:18:43.022 "flush": true, 00:18:43.022 "reset": true, 00:18:43.022 "nvme_admin": false, 00:18:43.022 "nvme_io": false, 00:18:43.022 "nvme_io_md": false, 00:18:43.022 "write_zeroes": true, 00:18:43.022 "zcopy": true, 00:18:43.022 "get_zone_info": false, 00:18:43.022 "zone_management": false, 00:18:43.022 "zone_append": false, 00:18:43.022 "compare": false, 00:18:43.022 "compare_and_write": false, 00:18:43.022 "abort": true, 00:18:43.022 "seek_hole": false, 00:18:43.022 "seek_data": false, 00:18:43.022 "copy": true, 00:18:43.023 "nvme_iov_md": false 00:18:43.023 }, 00:18:43.023 "memory_domains": [ 00:18:43.023 { 00:18:43.023 "dma_device_id": "system", 00:18:43.023 "dma_device_type": 1 00:18:43.023 }, 00:18:43.023 { 00:18:43.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.023 "dma_device_type": 2 00:18:43.023 } 00:18:43.023 ], 00:18:43.023 "driver_specific": {} 00:18:43.023 } 00:18:43.023 ] 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.023 "name": "Existed_Raid", 00:18:43.023 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:43.023 "strip_size_kb": 0, 00:18:43.023 "state": "configuring", 00:18:43.023 "raid_level": "raid1", 00:18:43.023 "superblock": true, 00:18:43.023 "num_base_bdevs": 4, 00:18:43.023 "num_base_bdevs_discovered": 2, 00:18:43.023 "num_base_bdevs_operational": 4, 
00:18:43.023 "base_bdevs_list": [ 00:18:43.023 { 00:18:43.023 "name": "BaseBdev1", 00:18:43.023 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:43.023 "is_configured": true, 00:18:43.023 "data_offset": 2048, 00:18:43.023 "data_size": 63488 00:18:43.023 }, 00:18:43.023 { 00:18:43.023 "name": "BaseBdev2", 00:18:43.023 "uuid": "0191ace3-1e5d-4469-a410-a101fd1e5bd6", 00:18:43.023 "is_configured": true, 00:18:43.023 "data_offset": 2048, 00:18:43.023 "data_size": 63488 00:18:43.023 }, 00:18:43.023 { 00:18:43.023 "name": "BaseBdev3", 00:18:43.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.023 "is_configured": false, 00:18:43.023 "data_offset": 0, 00:18:43.023 "data_size": 0 00:18:43.023 }, 00:18:43.023 { 00:18:43.023 "name": "BaseBdev4", 00:18:43.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.023 "is_configured": false, 00:18:43.023 "data_offset": 0, 00:18:43.023 "data_size": 0 00:18:43.023 } 00:18:43.023 ] 00:18:43.023 }' 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.023 22:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.283 [2024-12-09 22:59:59.122937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.283 BaseBdev3 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.283 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.542 [ 00:18:43.542 { 00:18:43.542 "name": "BaseBdev3", 00:18:43.542 "aliases": [ 00:18:43.542 "75d94702-85c3-4194-8667-a1264acc3c46" 00:18:43.542 ], 00:18:43.542 "product_name": "Malloc disk", 00:18:43.542 "block_size": 512, 00:18:43.542 "num_blocks": 65536, 00:18:43.542 "uuid": "75d94702-85c3-4194-8667-a1264acc3c46", 00:18:43.542 "assigned_rate_limits": { 00:18:43.542 "rw_ios_per_sec": 0, 00:18:43.542 "rw_mbytes_per_sec": 0, 00:18:43.542 "r_mbytes_per_sec": 0, 00:18:43.542 "w_mbytes_per_sec": 0 00:18:43.542 }, 00:18:43.542 "claimed": true, 00:18:43.542 "claim_type": "exclusive_write", 00:18:43.542 "zoned": false, 00:18:43.542 "supported_io_types": { 00:18:43.542 "read": true, 00:18:43.542 
"write": true, 00:18:43.542 "unmap": true, 00:18:43.542 "flush": true, 00:18:43.542 "reset": true, 00:18:43.542 "nvme_admin": false, 00:18:43.542 "nvme_io": false, 00:18:43.542 "nvme_io_md": false, 00:18:43.542 "write_zeroes": true, 00:18:43.542 "zcopy": true, 00:18:43.542 "get_zone_info": false, 00:18:43.542 "zone_management": false, 00:18:43.542 "zone_append": false, 00:18:43.542 "compare": false, 00:18:43.542 "compare_and_write": false, 00:18:43.542 "abort": true, 00:18:43.542 "seek_hole": false, 00:18:43.542 "seek_data": false, 00:18:43.542 "copy": true, 00:18:43.542 "nvme_iov_md": false 00:18:43.542 }, 00:18:43.542 "memory_domains": [ 00:18:43.542 { 00:18:43.542 "dma_device_id": "system", 00:18:43.542 "dma_device_type": 1 00:18:43.542 }, 00:18:43.542 { 00:18:43.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.542 "dma_device_type": 2 00:18:43.542 } 00:18:43.542 ], 00:18:43.542 "driver_specific": {} 00:18:43.542 } 00:18:43.542 ] 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.542 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.542 "name": "Existed_Raid", 00:18:43.542 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:43.542 "strip_size_kb": 0, 00:18:43.542 "state": "configuring", 00:18:43.542 "raid_level": "raid1", 00:18:43.542 "superblock": true, 00:18:43.542 "num_base_bdevs": 4, 00:18:43.542 "num_base_bdevs_discovered": 3, 00:18:43.542 "num_base_bdevs_operational": 4, 00:18:43.542 "base_bdevs_list": [ 00:18:43.542 { 00:18:43.542 "name": "BaseBdev1", 00:18:43.542 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:43.543 "is_configured": true, 00:18:43.543 "data_offset": 2048, 00:18:43.543 "data_size": 63488 00:18:43.543 }, 00:18:43.543 { 00:18:43.543 "name": "BaseBdev2", 00:18:43.543 "uuid": 
"0191ace3-1e5d-4469-a410-a101fd1e5bd6", 00:18:43.543 "is_configured": true, 00:18:43.543 "data_offset": 2048, 00:18:43.543 "data_size": 63488 00:18:43.543 }, 00:18:43.543 { 00:18:43.543 "name": "BaseBdev3", 00:18:43.543 "uuid": "75d94702-85c3-4194-8667-a1264acc3c46", 00:18:43.543 "is_configured": true, 00:18:43.543 "data_offset": 2048, 00:18:43.543 "data_size": 63488 00:18:43.543 }, 00:18:43.543 { 00:18:43.543 "name": "BaseBdev4", 00:18:43.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.543 "is_configured": false, 00:18:43.543 "data_offset": 0, 00:18:43.543 "data_size": 0 00:18:43.543 } 00:18:43.543 ] 00:18:43.543 }' 00:18:43.543 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.543 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.802 [2024-12-09 22:59:59.642427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.802 [2024-12-09 22:59:59.642852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:43.802 [2024-12-09 22:59:59.642881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:43.802 [2024-12-09 22:59:59.643216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.802 BaseBdev4 00:18:43.802 [2024-12-09 22:59:59.643425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:43.802 [2024-12-09 22:59:59.643444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:18:43.802 [2024-12-09 22:59:59.643649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.802 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.062 [ 00:18:44.062 { 00:18:44.062 "name": "BaseBdev4", 00:18:44.062 "aliases": [ 00:18:44.062 "3ab1ff35-19df-4ed3-8739-cf62f41a0ff2" 00:18:44.062 ], 00:18:44.062 "product_name": "Malloc disk", 00:18:44.062 "block_size": 512, 00:18:44.062 
"num_blocks": 65536, 00:18:44.062 "uuid": "3ab1ff35-19df-4ed3-8739-cf62f41a0ff2", 00:18:44.062 "assigned_rate_limits": { 00:18:44.062 "rw_ios_per_sec": 0, 00:18:44.062 "rw_mbytes_per_sec": 0, 00:18:44.062 "r_mbytes_per_sec": 0, 00:18:44.062 "w_mbytes_per_sec": 0 00:18:44.062 }, 00:18:44.062 "claimed": true, 00:18:44.062 "claim_type": "exclusive_write", 00:18:44.062 "zoned": false, 00:18:44.062 "supported_io_types": { 00:18:44.062 "read": true, 00:18:44.062 "write": true, 00:18:44.062 "unmap": true, 00:18:44.062 "flush": true, 00:18:44.062 "reset": true, 00:18:44.062 "nvme_admin": false, 00:18:44.062 "nvme_io": false, 00:18:44.062 "nvme_io_md": false, 00:18:44.062 "write_zeroes": true, 00:18:44.062 "zcopy": true, 00:18:44.062 "get_zone_info": false, 00:18:44.062 "zone_management": false, 00:18:44.062 "zone_append": false, 00:18:44.062 "compare": false, 00:18:44.062 "compare_and_write": false, 00:18:44.062 "abort": true, 00:18:44.062 "seek_hole": false, 00:18:44.062 "seek_data": false, 00:18:44.062 "copy": true, 00:18:44.062 "nvme_iov_md": false 00:18:44.062 }, 00:18:44.062 "memory_domains": [ 00:18:44.062 { 00:18:44.062 "dma_device_id": "system", 00:18:44.062 "dma_device_type": 1 00:18:44.062 }, 00:18:44.062 { 00:18:44.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.062 "dma_device_type": 2 00:18:44.062 } 00:18:44.062 ], 00:18:44.062 "driver_specific": {} 00:18:44.062 } 00:18:44.062 ] 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.062 "name": "Existed_Raid", 00:18:44.062 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:44.062 "strip_size_kb": 0, 00:18:44.062 "state": "online", 00:18:44.062 "raid_level": "raid1", 00:18:44.062 "superblock": true, 00:18:44.062 "num_base_bdevs": 4, 
00:18:44.062 "num_base_bdevs_discovered": 4, 00:18:44.062 "num_base_bdevs_operational": 4, 00:18:44.062 "base_bdevs_list": [ 00:18:44.062 { 00:18:44.062 "name": "BaseBdev1", 00:18:44.062 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:44.062 "is_configured": true, 00:18:44.062 "data_offset": 2048, 00:18:44.062 "data_size": 63488 00:18:44.062 }, 00:18:44.062 { 00:18:44.062 "name": "BaseBdev2", 00:18:44.062 "uuid": "0191ace3-1e5d-4469-a410-a101fd1e5bd6", 00:18:44.062 "is_configured": true, 00:18:44.062 "data_offset": 2048, 00:18:44.062 "data_size": 63488 00:18:44.062 }, 00:18:44.062 { 00:18:44.062 "name": "BaseBdev3", 00:18:44.062 "uuid": "75d94702-85c3-4194-8667-a1264acc3c46", 00:18:44.062 "is_configured": true, 00:18:44.062 "data_offset": 2048, 00:18:44.062 "data_size": 63488 00:18:44.062 }, 00:18:44.062 { 00:18:44.062 "name": "BaseBdev4", 00:18:44.062 "uuid": "3ab1ff35-19df-4ed3-8739-cf62f41a0ff2", 00:18:44.062 "is_configured": true, 00:18:44.062 "data_offset": 2048, 00:18:44.062 "data_size": 63488 00:18:44.062 } 00:18:44.062 ] 00:18:44.062 }' 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.062 22:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.322 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:44.322 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:44.322 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:44.322 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:44.322 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:44.322 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:44.581 
23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.582 [2024-12-09 23:00:00.186020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:44.582 "name": "Existed_Raid", 00:18:44.582 "aliases": [ 00:18:44.582 "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d" 00:18:44.582 ], 00:18:44.582 "product_name": "Raid Volume", 00:18:44.582 "block_size": 512, 00:18:44.582 "num_blocks": 63488, 00:18:44.582 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:44.582 "assigned_rate_limits": { 00:18:44.582 "rw_ios_per_sec": 0, 00:18:44.582 "rw_mbytes_per_sec": 0, 00:18:44.582 "r_mbytes_per_sec": 0, 00:18:44.582 "w_mbytes_per_sec": 0 00:18:44.582 }, 00:18:44.582 "claimed": false, 00:18:44.582 "zoned": false, 00:18:44.582 "supported_io_types": { 00:18:44.582 "read": true, 00:18:44.582 "write": true, 00:18:44.582 "unmap": false, 00:18:44.582 "flush": false, 00:18:44.582 "reset": true, 00:18:44.582 "nvme_admin": false, 00:18:44.582 "nvme_io": false, 00:18:44.582 "nvme_io_md": false, 00:18:44.582 "write_zeroes": true, 00:18:44.582 "zcopy": false, 00:18:44.582 "get_zone_info": false, 00:18:44.582 "zone_management": false, 00:18:44.582 "zone_append": false, 00:18:44.582 "compare": false, 00:18:44.582 "compare_and_write": false, 00:18:44.582 "abort": false, 00:18:44.582 "seek_hole": false, 00:18:44.582 "seek_data": false, 00:18:44.582 "copy": false, 00:18:44.582 
"nvme_iov_md": false 00:18:44.582 }, 00:18:44.582 "memory_domains": [ 00:18:44.582 { 00:18:44.582 "dma_device_id": "system", 00:18:44.582 "dma_device_type": 1 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.582 "dma_device_type": 2 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "system", 00:18:44.582 "dma_device_type": 1 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.582 "dma_device_type": 2 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "system", 00:18:44.582 "dma_device_type": 1 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.582 "dma_device_type": 2 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "system", 00:18:44.582 "dma_device_type": 1 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.582 "dma_device_type": 2 00:18:44.582 } 00:18:44.582 ], 00:18:44.582 "driver_specific": { 00:18:44.582 "raid": { 00:18:44.582 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:44.582 "strip_size_kb": 0, 00:18:44.582 "state": "online", 00:18:44.582 "raid_level": "raid1", 00:18:44.582 "superblock": true, 00:18:44.582 "num_base_bdevs": 4, 00:18:44.582 "num_base_bdevs_discovered": 4, 00:18:44.582 "num_base_bdevs_operational": 4, 00:18:44.582 "base_bdevs_list": [ 00:18:44.582 { 00:18:44.582 "name": "BaseBdev1", 00:18:44.582 "uuid": "d5e052db-ef06-4ce1-98d8-d69fe3178198", 00:18:44.582 "is_configured": true, 00:18:44.582 "data_offset": 2048, 00:18:44.582 "data_size": 63488 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "name": "BaseBdev2", 00:18:44.582 "uuid": "0191ace3-1e5d-4469-a410-a101fd1e5bd6", 00:18:44.582 "is_configured": true, 00:18:44.582 "data_offset": 2048, 00:18:44.582 "data_size": 63488 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "name": "BaseBdev3", 00:18:44.582 "uuid": "75d94702-85c3-4194-8667-a1264acc3c46", 00:18:44.582 "is_configured": true, 
00:18:44.582 "data_offset": 2048, 00:18:44.582 "data_size": 63488 00:18:44.582 }, 00:18:44.582 { 00:18:44.582 "name": "BaseBdev4", 00:18:44.582 "uuid": "3ab1ff35-19df-4ed3-8739-cf62f41a0ff2", 00:18:44.582 "is_configured": true, 00:18:44.582 "data_offset": 2048, 00:18:44.582 "data_size": 63488 00:18:44.582 } 00:18:44.582 ] 00:18:44.582 } 00:18:44.582 } 00:18:44.582 }' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:44.582 BaseBdev2 00:18:44.582 BaseBdev3 00:18:44.582 BaseBdev4' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.582 23:00:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.582 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 [2024-12-09 23:00:00.521172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:44.842 23:00:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.842 "name": "Existed_Raid", 00:18:44.842 "uuid": "c2696967-c6cc-4ee9-b20b-ad7dc5750a8d", 00:18:44.842 "strip_size_kb": 0, 00:18:44.842 
"state": "online", 00:18:44.842 "raid_level": "raid1", 00:18:44.842 "superblock": true, 00:18:44.842 "num_base_bdevs": 4, 00:18:44.842 "num_base_bdevs_discovered": 3, 00:18:44.842 "num_base_bdevs_operational": 3, 00:18:44.842 "base_bdevs_list": [ 00:18:44.842 { 00:18:44.842 "name": null, 00:18:44.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.842 "is_configured": false, 00:18:44.842 "data_offset": 0, 00:18:44.842 "data_size": 63488 00:18:44.842 }, 00:18:44.842 { 00:18:44.842 "name": "BaseBdev2", 00:18:44.842 "uuid": "0191ace3-1e5d-4469-a410-a101fd1e5bd6", 00:18:44.842 "is_configured": true, 00:18:44.842 "data_offset": 2048, 00:18:44.842 "data_size": 63488 00:18:44.842 }, 00:18:44.842 { 00:18:44.842 "name": "BaseBdev3", 00:18:44.842 "uuid": "75d94702-85c3-4194-8667-a1264acc3c46", 00:18:44.842 "is_configured": true, 00:18:44.842 "data_offset": 2048, 00:18:44.842 "data_size": 63488 00:18:44.842 }, 00:18:44.842 { 00:18:44.842 "name": "BaseBdev4", 00:18:44.842 "uuid": "3ab1ff35-19df-4ed3-8739-cf62f41a0ff2", 00:18:44.842 "is_configured": true, 00:18:44.842 "data_offset": 2048, 00:18:44.842 "data_size": 63488 00:18:44.842 } 00:18:44.842 ] 00:18:44.842 }' 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.842 23:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.412 23:00:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.412 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.412 [2024-12-09 23:00:01.201695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:45.670 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.671 [2024-12-09 23:00:01.378059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.671 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.929 [2024-12-09 23:00:01.554914] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:45.929 [2024-12-09 23:00:01.555114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.929 [2024-12-09 23:00:01.670947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.929 [2024-12-09 23:00:01.671119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.929 [2024-12-09 23:00:01.671185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.929 BaseBdev2 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.929 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:46.194 [ 00:18:46.194 { 00:18:46.194 "name": "BaseBdev2", 00:18:46.194 "aliases": [ 00:18:46.194 "a171495a-ea06-47f5-b31a-974af007ec9f" 00:18:46.194 ], 00:18:46.194 "product_name": "Malloc disk", 00:18:46.194 "block_size": 512, 00:18:46.194 "num_blocks": 65536, 00:18:46.194 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:46.194 "assigned_rate_limits": { 00:18:46.194 "rw_ios_per_sec": 0, 00:18:46.194 "rw_mbytes_per_sec": 0, 00:18:46.194 "r_mbytes_per_sec": 0, 00:18:46.194 "w_mbytes_per_sec": 0 00:18:46.194 }, 00:18:46.194 "claimed": false, 00:18:46.194 "zoned": false, 00:18:46.194 "supported_io_types": { 00:18:46.194 "read": true, 00:18:46.194 "write": true, 00:18:46.194 "unmap": true, 00:18:46.194 "flush": true, 00:18:46.194 "reset": true, 00:18:46.194 "nvme_admin": false, 00:18:46.194 "nvme_io": false, 00:18:46.194 "nvme_io_md": false, 00:18:46.194 "write_zeroes": true, 00:18:46.194 "zcopy": true, 00:18:46.194 "get_zone_info": false, 00:18:46.194 "zone_management": false, 00:18:46.194 "zone_append": false, 00:18:46.194 "compare": false, 00:18:46.194 "compare_and_write": false, 00:18:46.194 "abort": true, 00:18:46.194 "seek_hole": false, 00:18:46.194 "seek_data": false, 00:18:46.194 "copy": true, 00:18:46.194 "nvme_iov_md": false 00:18:46.194 }, 00:18:46.194 "memory_domains": [ 00:18:46.194 { 00:18:46.194 "dma_device_id": "system", 00:18:46.194 "dma_device_type": 1 00:18:46.194 }, 00:18:46.194 { 00:18:46.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.194 "dma_device_type": 2 00:18:46.194 } 00:18:46.194 ], 00:18:46.194 "driver_specific": {} 00:18:46.194 } 00:18:46.194 ] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:46.194 23:00:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 BaseBdev3 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 [ 00:18:46.194 { 00:18:46.194 "name": "BaseBdev3", 00:18:46.194 "aliases": [ 00:18:46.194 "07d4be84-ea29-4f36-be32-6e71601bc5a7" 00:18:46.194 ], 00:18:46.194 "product_name": "Malloc disk", 00:18:46.194 "block_size": 512, 00:18:46.194 "num_blocks": 65536, 00:18:46.194 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:46.194 "assigned_rate_limits": { 00:18:46.194 "rw_ios_per_sec": 0, 00:18:46.194 "rw_mbytes_per_sec": 0, 00:18:46.194 "r_mbytes_per_sec": 0, 00:18:46.194 "w_mbytes_per_sec": 0 00:18:46.194 }, 00:18:46.194 "claimed": false, 00:18:46.194 "zoned": false, 00:18:46.194 "supported_io_types": { 00:18:46.194 "read": true, 00:18:46.194 "write": true, 00:18:46.194 "unmap": true, 00:18:46.194 "flush": true, 00:18:46.194 "reset": true, 00:18:46.194 "nvme_admin": false, 00:18:46.194 "nvme_io": false, 00:18:46.194 "nvme_io_md": false, 00:18:46.194 "write_zeroes": true, 00:18:46.194 "zcopy": true, 00:18:46.194 "get_zone_info": false, 00:18:46.194 "zone_management": false, 00:18:46.194 "zone_append": false, 00:18:46.194 "compare": false, 00:18:46.194 "compare_and_write": false, 00:18:46.194 "abort": true, 00:18:46.194 "seek_hole": false, 00:18:46.194 "seek_data": false, 00:18:46.194 "copy": true, 00:18:46.194 "nvme_iov_md": false 00:18:46.194 }, 00:18:46.194 "memory_domains": [ 00:18:46.194 { 00:18:46.194 "dma_device_id": "system", 00:18:46.194 "dma_device_type": 1 00:18:46.194 }, 00:18:46.194 { 00:18:46.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.194 "dma_device_type": 2 00:18:46.194 } 00:18:46.194 ], 00:18:46.194 "driver_specific": {} 00:18:46.194 } 00:18:46.194 ] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 BaseBdev4 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.194 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 [ 00:18:46.194 { 00:18:46.194 "name": "BaseBdev4", 00:18:46.194 "aliases": [ 00:18:46.194 "60ef19aa-0843-42e2-826a-0df25c7a2e4e" 00:18:46.194 ], 00:18:46.194 "product_name": "Malloc disk", 00:18:46.194 "block_size": 512, 00:18:46.194 "num_blocks": 65536, 00:18:46.194 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:46.195 "assigned_rate_limits": { 00:18:46.195 "rw_ios_per_sec": 0, 00:18:46.195 "rw_mbytes_per_sec": 0, 00:18:46.195 "r_mbytes_per_sec": 0, 00:18:46.195 "w_mbytes_per_sec": 0 00:18:46.195 }, 00:18:46.195 "claimed": false, 00:18:46.195 "zoned": false, 00:18:46.195 "supported_io_types": { 00:18:46.195 "read": true, 00:18:46.195 "write": true, 00:18:46.195 "unmap": true, 00:18:46.195 "flush": true, 00:18:46.195 "reset": true, 00:18:46.195 "nvme_admin": false, 00:18:46.195 "nvme_io": false, 00:18:46.195 "nvme_io_md": false, 00:18:46.195 "write_zeroes": true, 00:18:46.195 "zcopy": true, 00:18:46.195 "get_zone_info": false, 00:18:46.195 "zone_management": false, 00:18:46.195 "zone_append": false, 00:18:46.195 "compare": false, 00:18:46.195 "compare_and_write": false, 00:18:46.195 "abort": true, 00:18:46.195 "seek_hole": false, 00:18:46.195 "seek_data": false, 00:18:46.195 "copy": true, 00:18:46.195 "nvme_iov_md": false 00:18:46.195 }, 00:18:46.195 "memory_domains": [ 00:18:46.195 { 00:18:46.195 "dma_device_id": "system", 00:18:46.195 "dma_device_type": 1 00:18:46.195 }, 00:18:46.195 { 00:18:46.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.195 "dma_device_type": 2 00:18:46.195 } 00:18:46.195 ], 00:18:46.195 "driver_specific": {} 00:18:46.195 } 00:18:46.195 ] 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 [2024-12-09 23:00:01.994022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.195 [2024-12-09 23:00:01.994149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.195 [2024-12-09 23:00:01.994217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.195 [2024-12-09 23:00:01.996454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.195 [2024-12-09 23:00:01.996597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.195 23:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.457 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.457 "name": "Existed_Raid", 00:18:46.457 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:46.457 "strip_size_kb": 0, 00:18:46.457 "state": "configuring", 00:18:46.457 "raid_level": "raid1", 00:18:46.457 "superblock": true, 00:18:46.457 "num_base_bdevs": 4, 00:18:46.457 "num_base_bdevs_discovered": 3, 00:18:46.457 "num_base_bdevs_operational": 4, 00:18:46.457 "base_bdevs_list": [ 00:18:46.457 { 00:18:46.457 "name": "BaseBdev1", 00:18:46.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.457 "is_configured": false, 00:18:46.457 "data_offset": 0, 00:18:46.457 "data_size": 0 00:18:46.457 }, 00:18:46.457 { 00:18:46.457 "name": "BaseBdev2", 00:18:46.457 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 
00:18:46.457 "is_configured": true, 00:18:46.457 "data_offset": 2048, 00:18:46.457 "data_size": 63488 00:18:46.457 }, 00:18:46.457 { 00:18:46.457 "name": "BaseBdev3", 00:18:46.457 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:46.457 "is_configured": true, 00:18:46.457 "data_offset": 2048, 00:18:46.457 "data_size": 63488 00:18:46.457 }, 00:18:46.457 { 00:18:46.457 "name": "BaseBdev4", 00:18:46.457 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:46.457 "is_configured": true, 00:18:46.457 "data_offset": 2048, 00:18:46.457 "data_size": 63488 00:18:46.457 } 00:18:46.457 ] 00:18:46.457 }' 00:18:46.457 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.457 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.715 [2024-12-09 23:00:02.457547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.715 "name": "Existed_Raid", 00:18:46.715 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:46.715 "strip_size_kb": 0, 00:18:46.715 "state": "configuring", 00:18:46.715 "raid_level": "raid1", 00:18:46.715 "superblock": true, 00:18:46.715 "num_base_bdevs": 4, 00:18:46.715 "num_base_bdevs_discovered": 2, 00:18:46.715 "num_base_bdevs_operational": 4, 00:18:46.715 "base_bdevs_list": [ 00:18:46.715 { 00:18:46.715 "name": "BaseBdev1", 00:18:46.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.715 "is_configured": false, 00:18:46.715 "data_offset": 0, 00:18:46.715 "data_size": 0 00:18:46.715 }, 00:18:46.715 { 00:18:46.715 "name": null, 00:18:46.715 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:46.715 
"is_configured": false, 00:18:46.715 "data_offset": 0, 00:18:46.715 "data_size": 63488 00:18:46.715 }, 00:18:46.715 { 00:18:46.715 "name": "BaseBdev3", 00:18:46.715 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:46.715 "is_configured": true, 00:18:46.715 "data_offset": 2048, 00:18:46.715 "data_size": 63488 00:18:46.715 }, 00:18:46.715 { 00:18:46.715 "name": "BaseBdev4", 00:18:46.715 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:46.715 "is_configured": true, 00:18:46.715 "data_offset": 2048, 00:18:46.715 "data_size": 63488 00:18:46.715 } 00:18:46.715 ] 00:18:46.715 }' 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.715 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.278 23:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.278 [2024-12-09 23:00:03.041224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.278 BaseBdev1 
00:18:47.278 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.278 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 [ 00:18:47.279 { 00:18:47.279 "name": "BaseBdev1", 00:18:47.279 "aliases": [ 00:18:47.279 "864625e7-47fd-49b6-b5b0-5953ad346baf" 00:18:47.279 ], 00:18:47.279 "product_name": "Malloc disk", 00:18:47.279 "block_size": 512, 00:18:47.279 "num_blocks": 65536, 00:18:47.279 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:47.279 "assigned_rate_limits": { 00:18:47.279 
"rw_ios_per_sec": 0, 00:18:47.279 "rw_mbytes_per_sec": 0, 00:18:47.279 "r_mbytes_per_sec": 0, 00:18:47.279 "w_mbytes_per_sec": 0 00:18:47.279 }, 00:18:47.279 "claimed": true, 00:18:47.279 "claim_type": "exclusive_write", 00:18:47.279 "zoned": false, 00:18:47.279 "supported_io_types": { 00:18:47.279 "read": true, 00:18:47.279 "write": true, 00:18:47.279 "unmap": true, 00:18:47.279 "flush": true, 00:18:47.279 "reset": true, 00:18:47.279 "nvme_admin": false, 00:18:47.279 "nvme_io": false, 00:18:47.279 "nvme_io_md": false, 00:18:47.279 "write_zeroes": true, 00:18:47.279 "zcopy": true, 00:18:47.279 "get_zone_info": false, 00:18:47.279 "zone_management": false, 00:18:47.279 "zone_append": false, 00:18:47.279 "compare": false, 00:18:47.279 "compare_and_write": false, 00:18:47.279 "abort": true, 00:18:47.279 "seek_hole": false, 00:18:47.279 "seek_data": false, 00:18:47.279 "copy": true, 00:18:47.279 "nvme_iov_md": false 00:18:47.279 }, 00:18:47.279 "memory_domains": [ 00:18:47.279 { 00:18:47.279 "dma_device_id": "system", 00:18:47.279 "dma_device_type": 1 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.279 "dma_device_type": 2 00:18:47.279 } 00:18:47.279 ], 00:18:47.279 "driver_specific": {} 00:18:47.279 } 00:18:47.279 ] 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.537 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.537 "name": "Existed_Raid", 00:18:47.537 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:47.537 "strip_size_kb": 0, 00:18:47.537 "state": "configuring", 00:18:47.537 "raid_level": "raid1", 00:18:47.537 "superblock": true, 00:18:47.537 "num_base_bdevs": 4, 00:18:47.537 "num_base_bdevs_discovered": 3, 00:18:47.537 "num_base_bdevs_operational": 4, 00:18:47.538 "base_bdevs_list": [ 00:18:47.538 { 00:18:47.538 "name": "BaseBdev1", 00:18:47.538 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:47.538 "is_configured": true, 00:18:47.538 "data_offset": 2048, 00:18:47.538 "data_size": 63488 
00:18:47.538 }, 00:18:47.538 { 00:18:47.538 "name": null, 00:18:47.538 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:47.538 "is_configured": false, 00:18:47.538 "data_offset": 0, 00:18:47.538 "data_size": 63488 00:18:47.538 }, 00:18:47.538 { 00:18:47.538 "name": "BaseBdev3", 00:18:47.538 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:47.538 "is_configured": true, 00:18:47.538 "data_offset": 2048, 00:18:47.538 "data_size": 63488 00:18:47.538 }, 00:18:47.538 { 00:18:47.538 "name": "BaseBdev4", 00:18:47.538 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:47.538 "is_configured": true, 00:18:47.538 "data_offset": 2048, 00:18:47.538 "data_size": 63488 00:18:47.538 } 00:18:47.538 ] 00:18:47.538 }' 00:18:47.538 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.538 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.797 
[2024-12-09 23:00:03.620677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.797 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.056 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.056 23:00:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.056 "name": "Existed_Raid", 00:18:48.056 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:48.056 "strip_size_kb": 0, 00:18:48.056 "state": "configuring", 00:18:48.056 "raid_level": "raid1", 00:18:48.056 "superblock": true, 00:18:48.056 "num_base_bdevs": 4, 00:18:48.056 "num_base_bdevs_discovered": 2, 00:18:48.056 "num_base_bdevs_operational": 4, 00:18:48.056 "base_bdevs_list": [ 00:18:48.056 { 00:18:48.056 "name": "BaseBdev1", 00:18:48.056 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:48.056 "is_configured": true, 00:18:48.056 "data_offset": 2048, 00:18:48.056 "data_size": 63488 00:18:48.056 }, 00:18:48.056 { 00:18:48.056 "name": null, 00:18:48.056 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:48.056 "is_configured": false, 00:18:48.056 "data_offset": 0, 00:18:48.056 "data_size": 63488 00:18:48.056 }, 00:18:48.056 { 00:18:48.056 "name": null, 00:18:48.056 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:48.056 "is_configured": false, 00:18:48.056 "data_offset": 0, 00:18:48.056 "data_size": 63488 00:18:48.056 }, 00:18:48.056 { 00:18:48.056 "name": "BaseBdev4", 00:18:48.056 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:48.056 "is_configured": true, 00:18:48.056 "data_offset": 2048, 00:18:48.056 "data_size": 63488 00:18:48.056 } 00:18:48.056 ] 00:18:48.056 }' 00:18:48.056 23:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.056 23:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.315 
23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 [2024-12-09 23:00:04.144672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.574 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.575 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.575 "name": "Existed_Raid", 00:18:48.575 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:48.575 "strip_size_kb": 0, 00:18:48.575 "state": "configuring", 00:18:48.575 "raid_level": "raid1", 00:18:48.575 "superblock": true, 00:18:48.575 "num_base_bdevs": 4, 00:18:48.575 "num_base_bdevs_discovered": 3, 00:18:48.575 "num_base_bdevs_operational": 4, 00:18:48.575 "base_bdevs_list": [ 00:18:48.575 { 00:18:48.575 "name": "BaseBdev1", 00:18:48.575 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:48.575 "is_configured": true, 00:18:48.575 "data_offset": 2048, 00:18:48.575 "data_size": 63488 00:18:48.575 }, 00:18:48.575 { 00:18:48.575 "name": null, 00:18:48.575 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:48.575 "is_configured": false, 00:18:48.575 "data_offset": 0, 00:18:48.575 "data_size": 63488 00:18:48.575 }, 00:18:48.575 { 00:18:48.575 "name": "BaseBdev3", 00:18:48.575 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:48.575 "is_configured": true, 00:18:48.575 "data_offset": 2048, 00:18:48.575 "data_size": 63488 00:18:48.575 }, 00:18:48.575 { 00:18:48.575 "name": "BaseBdev4", 00:18:48.575 "uuid": 
"60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:48.575 "is_configured": true, 00:18:48.575 "data_offset": 2048, 00:18:48.575 "data_size": 63488 00:18:48.575 } 00:18:48.575 ] 00:18:48.575 }' 00:18:48.575 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.575 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.833 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.834 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.834 [2024-12-09 23:00:04.672725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.093 "name": "Existed_Raid", 00:18:49.093 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:49.093 "strip_size_kb": 0, 00:18:49.093 "state": "configuring", 00:18:49.093 "raid_level": "raid1", 00:18:49.093 "superblock": true, 00:18:49.093 "num_base_bdevs": 4, 00:18:49.093 "num_base_bdevs_discovered": 2, 00:18:49.093 "num_base_bdevs_operational": 4, 00:18:49.093 "base_bdevs_list": [ 00:18:49.093 { 00:18:49.093 "name": null, 00:18:49.093 
"uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:49.093 "is_configured": false, 00:18:49.093 "data_offset": 0, 00:18:49.093 "data_size": 63488 00:18:49.093 }, 00:18:49.093 { 00:18:49.093 "name": null, 00:18:49.093 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:49.093 "is_configured": false, 00:18:49.093 "data_offset": 0, 00:18:49.093 "data_size": 63488 00:18:49.093 }, 00:18:49.093 { 00:18:49.093 "name": "BaseBdev3", 00:18:49.093 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:49.093 "is_configured": true, 00:18:49.093 "data_offset": 2048, 00:18:49.093 "data_size": 63488 00:18:49.093 }, 00:18:49.093 { 00:18:49.093 "name": "BaseBdev4", 00:18:49.093 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:49.093 "is_configured": true, 00:18:49.093 "data_offset": 2048, 00:18:49.093 "data_size": 63488 00:18:49.093 } 00:18:49.093 ] 00:18:49.093 }' 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.093 23:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.357 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.358 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:49.358 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.358 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.617 [2024-12-09 23:00:05.261260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.617 23:00:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.617 "name": "Existed_Raid", 00:18:49.617 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:49.617 "strip_size_kb": 0, 00:18:49.617 "state": "configuring", 00:18:49.617 "raid_level": "raid1", 00:18:49.617 "superblock": true, 00:18:49.617 "num_base_bdevs": 4, 00:18:49.617 "num_base_bdevs_discovered": 3, 00:18:49.617 "num_base_bdevs_operational": 4, 00:18:49.617 "base_bdevs_list": [ 00:18:49.617 { 00:18:49.617 "name": null, 00:18:49.617 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:49.617 "is_configured": false, 00:18:49.617 "data_offset": 0, 00:18:49.617 "data_size": 63488 00:18:49.617 }, 00:18:49.617 { 00:18:49.617 "name": "BaseBdev2", 00:18:49.617 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:49.617 "is_configured": true, 00:18:49.617 "data_offset": 2048, 00:18:49.617 "data_size": 63488 00:18:49.617 }, 00:18:49.617 { 00:18:49.617 "name": "BaseBdev3", 00:18:49.617 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:49.617 "is_configured": true, 00:18:49.617 "data_offset": 2048, 00:18:49.617 "data_size": 63488 00:18:49.617 }, 00:18:49.617 { 00:18:49.617 "name": "BaseBdev4", 00:18:49.617 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:49.617 "is_configured": true, 00:18:49.617 "data_offset": 2048, 00:18:49.617 "data_size": 63488 00:18:49.617 } 00:18:49.617 ] 00:18:49.617 }' 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.617 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.874 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:50.134 23:00:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 864625e7-47fd-49b6-b5b0-5953ad346baf 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.134 [2024-12-09 23:00:05.848688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:50.134 [2024-12-09 23:00:05.849001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:50.134 [2024-12-09 23:00:05.849023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:50.134 [2024-12-09 23:00:05.849311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:18:50.134 [2024-12-09 23:00:05.849536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:50.134 [2024-12-09 23:00:05.849552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:50.134 NewBaseBdev 00:18:50.134 [2024-12-09 23:00:05.849772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:50.134 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.134 23:00:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.134 [ 00:18:50.134 { 00:18:50.134 "name": "NewBaseBdev", 00:18:50.134 "aliases": [ 00:18:50.134 "864625e7-47fd-49b6-b5b0-5953ad346baf" 00:18:50.134 ], 00:18:50.134 "product_name": "Malloc disk", 00:18:50.134 "block_size": 512, 00:18:50.134 "num_blocks": 65536, 00:18:50.134 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:50.135 "assigned_rate_limits": { 00:18:50.135 "rw_ios_per_sec": 0, 00:18:50.135 "rw_mbytes_per_sec": 0, 00:18:50.135 "r_mbytes_per_sec": 0, 00:18:50.135 "w_mbytes_per_sec": 0 00:18:50.135 }, 00:18:50.135 "claimed": true, 00:18:50.135 "claim_type": "exclusive_write", 00:18:50.135 "zoned": false, 00:18:50.135 "supported_io_types": { 00:18:50.135 "read": true, 00:18:50.135 "write": true, 00:18:50.135 "unmap": true, 00:18:50.135 "flush": true, 00:18:50.135 "reset": true, 00:18:50.135 "nvme_admin": false, 00:18:50.135 "nvme_io": false, 00:18:50.135 "nvme_io_md": false, 00:18:50.135 "write_zeroes": true, 00:18:50.135 "zcopy": true, 00:18:50.135 "get_zone_info": false, 00:18:50.135 "zone_management": false, 00:18:50.135 "zone_append": false, 00:18:50.135 "compare": false, 00:18:50.135 "compare_and_write": false, 00:18:50.135 "abort": true, 00:18:50.135 "seek_hole": false, 00:18:50.135 "seek_data": false, 00:18:50.135 "copy": true, 00:18:50.135 "nvme_iov_md": false 00:18:50.135 }, 00:18:50.135 "memory_domains": [ 00:18:50.135 { 00:18:50.135 "dma_device_id": "system", 00:18:50.135 "dma_device_type": 1 00:18:50.135 }, 00:18:50.135 { 00:18:50.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.135 "dma_device_type": 2 00:18:50.135 } 00:18:50.135 ], 00:18:50.135 "driver_specific": {} 00:18:50.135 } 00:18:50.135 ] 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:50.135 23:00:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.135 "name": "Existed_Raid", 00:18:50.135 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:50.135 "strip_size_kb": 0, 00:18:50.135 
"state": "online", 00:18:50.135 "raid_level": "raid1", 00:18:50.135 "superblock": true, 00:18:50.135 "num_base_bdevs": 4, 00:18:50.135 "num_base_bdevs_discovered": 4, 00:18:50.135 "num_base_bdevs_operational": 4, 00:18:50.135 "base_bdevs_list": [ 00:18:50.135 { 00:18:50.135 "name": "NewBaseBdev", 00:18:50.135 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:50.135 "is_configured": true, 00:18:50.135 "data_offset": 2048, 00:18:50.135 "data_size": 63488 00:18:50.135 }, 00:18:50.135 { 00:18:50.135 "name": "BaseBdev2", 00:18:50.135 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:50.135 "is_configured": true, 00:18:50.135 "data_offset": 2048, 00:18:50.135 "data_size": 63488 00:18:50.135 }, 00:18:50.135 { 00:18:50.135 "name": "BaseBdev3", 00:18:50.135 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:50.135 "is_configured": true, 00:18:50.135 "data_offset": 2048, 00:18:50.135 "data_size": 63488 00:18:50.135 }, 00:18:50.135 { 00:18:50.135 "name": "BaseBdev4", 00:18:50.135 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:50.135 "is_configured": true, 00:18:50.135 "data_offset": 2048, 00:18:50.135 "data_size": 63488 00:18:50.135 } 00:18:50.135 ] 00:18:50.135 }' 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.135 23:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:50.701 
23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.701 [2024-12-09 23:00:06.353008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.701 "name": "Existed_Raid", 00:18:50.701 "aliases": [ 00:18:50.701 "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96" 00:18:50.701 ], 00:18:50.701 "product_name": "Raid Volume", 00:18:50.701 "block_size": 512, 00:18:50.701 "num_blocks": 63488, 00:18:50.701 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:50.701 "assigned_rate_limits": { 00:18:50.701 "rw_ios_per_sec": 0, 00:18:50.701 "rw_mbytes_per_sec": 0, 00:18:50.701 "r_mbytes_per_sec": 0, 00:18:50.701 "w_mbytes_per_sec": 0 00:18:50.701 }, 00:18:50.701 "claimed": false, 00:18:50.701 "zoned": false, 00:18:50.701 "supported_io_types": { 00:18:50.701 "read": true, 00:18:50.701 "write": true, 00:18:50.701 "unmap": false, 00:18:50.701 "flush": false, 00:18:50.701 "reset": true, 00:18:50.701 "nvme_admin": false, 00:18:50.701 "nvme_io": false, 00:18:50.701 "nvme_io_md": false, 00:18:50.701 "write_zeroes": true, 00:18:50.701 "zcopy": false, 00:18:50.701 "get_zone_info": false, 00:18:50.701 "zone_management": false, 00:18:50.701 "zone_append": false, 00:18:50.701 "compare": false, 00:18:50.701 "compare_and_write": false, 00:18:50.701 
"abort": false, 00:18:50.701 "seek_hole": false, 00:18:50.701 "seek_data": false, 00:18:50.701 "copy": false, 00:18:50.701 "nvme_iov_md": false 00:18:50.701 }, 00:18:50.701 "memory_domains": [ 00:18:50.701 { 00:18:50.701 "dma_device_id": "system", 00:18:50.701 "dma_device_type": 1 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.701 "dma_device_type": 2 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "system", 00:18:50.701 "dma_device_type": 1 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.701 "dma_device_type": 2 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "system", 00:18:50.701 "dma_device_type": 1 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.701 "dma_device_type": 2 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "system", 00:18:50.701 "dma_device_type": 1 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.701 "dma_device_type": 2 00:18:50.701 } 00:18:50.701 ], 00:18:50.701 "driver_specific": { 00:18:50.701 "raid": { 00:18:50.701 "uuid": "db00c0d7-13a4-4195-9bb1-55fc4b2b7e96", 00:18:50.701 "strip_size_kb": 0, 00:18:50.701 "state": "online", 00:18:50.701 "raid_level": "raid1", 00:18:50.701 "superblock": true, 00:18:50.701 "num_base_bdevs": 4, 00:18:50.701 "num_base_bdevs_discovered": 4, 00:18:50.701 "num_base_bdevs_operational": 4, 00:18:50.701 "base_bdevs_list": [ 00:18:50.701 { 00:18:50.701 "name": "NewBaseBdev", 00:18:50.701 "uuid": "864625e7-47fd-49b6-b5b0-5953ad346baf", 00:18:50.701 "is_configured": true, 00:18:50.701 "data_offset": 2048, 00:18:50.701 "data_size": 63488 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "name": "BaseBdev2", 00:18:50.701 "uuid": "a171495a-ea06-47f5-b31a-974af007ec9f", 00:18:50.701 "is_configured": true, 00:18:50.701 "data_offset": 2048, 00:18:50.701 "data_size": 63488 00:18:50.701 }, 00:18:50.701 { 
00:18:50.701 "name": "BaseBdev3", 00:18:50.701 "uuid": "07d4be84-ea29-4f36-be32-6e71601bc5a7", 00:18:50.701 "is_configured": true, 00:18:50.701 "data_offset": 2048, 00:18:50.701 "data_size": 63488 00:18:50.701 }, 00:18:50.701 { 00:18:50.701 "name": "BaseBdev4", 00:18:50.701 "uuid": "60ef19aa-0843-42e2-826a-0df25c7a2e4e", 00:18:50.701 "is_configured": true, 00:18:50.701 "data_offset": 2048, 00:18:50.701 "data_size": 63488 00:18:50.701 } 00:18:50.701 ] 00:18:50.701 } 00:18:50.701 } 00:18:50.701 }' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:50.701 BaseBdev2 00:18:50.701 BaseBdev3 00:18:50.701 BaseBdev4' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.701 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.702 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.702 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:50.702 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.702 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.960 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.960 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.960 23:00:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.960 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.961 [2024-12-09 23:00:06.644074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.961 [2024-12-09 23:00:06.644122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.961 [2024-12-09 23:00:06.644218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.961 [2024-12-09 23:00:06.644622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.961 [2024-12-09 23:00:06.644642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74460 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74460 ']' 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74460 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74460 00:18:50.961 killing process with pid 74460 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74460' 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74460 00:18:50.961 [2024-12-09 23:00:06.693248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.961 23:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74460 00:18:51.525 [2024-12-09 23:00:07.180866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.899 ************************************ 00:18:52.899 END TEST raid_state_function_test_sb 00:18:52.899 ************************************ 00:18:52.899 23:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:52.899 00:18:52.899 real 0m12.463s 
00:18:52.899 user 0m19.410s 00:18:52.899 sys 0m2.332s 00:18:52.899 23:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.899 23:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.899 23:00:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:52.899 23:00:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:52.899 23:00:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.899 23:00:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.899 ************************************ 00:18:52.899 START TEST raid_superblock_test 00:18:52.899 ************************************ 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:52.899 23:00:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75142 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75142 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75142 ']' 00:18:52.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.899 23:00:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.899 [2024-12-09 23:00:08.713519] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:18:52.899 [2024-12-09 23:00:08.713661] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75142 ] 00:18:53.156 [2024-12-09 23:00:08.880791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.413 [2024-12-09 23:00:09.020413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.413 [2024-12-09 23:00:09.266369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.413 [2024-12-09 23:00:09.266432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:53.976 
23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.976 malloc1 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.976 [2024-12-09 23:00:09.705496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.976 [2024-12-09 23:00:09.705690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.976 [2024-12-09 23:00:09.705746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:53.976 [2024-12-09 23:00:09.705797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.976 [2024-12-09 23:00:09.708323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.976 [2024-12-09 23:00:09.708439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.976 pt1 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.976 malloc2 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.976 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.977 [2024-12-09 23:00:09.769288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.977 [2024-12-09 23:00:09.769377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.977 [2024-12-09 23:00:09.769409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:53.977 [2024-12-09 23:00:09.769422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.977 [2024-12-09 23:00:09.771823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.977 [2024-12-09 23:00:09.771970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:53.977 
pt2 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.977 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.234 malloc3 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.234 [2024-12-09 23:00:09.845722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:54.234 [2024-12-09 23:00:09.845915] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.234 [2024-12-09 23:00:09.845972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:54.234 [2024-12-09 23:00:09.846050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.234 [2024-12-09 23:00:09.848687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.234 [2024-12-09 23:00:09.848795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:54.234 pt3 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.234 malloc4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.234 [2024-12-09 23:00:09.914778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:54.234 [2024-12-09 23:00:09.914981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.234 [2024-12-09 23:00:09.915040] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:54.234 [2024-12-09 23:00:09.915090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.234 [2024-12-09 23:00:09.917671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.234 [2024-12-09 23:00:09.917784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:54.234 pt4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.234 [2024-12-09 23:00:09.926795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:54.234 [2024-12-09 23:00:09.929048] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:54.234 [2024-12-09 23:00:09.929197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:54.234 [2024-12-09 23:00:09.929289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:54.234 [2024-12-09 23:00:09.929577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:54.234 [2024-12-09 23:00:09.929601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:54.234 [2024-12-09 23:00:09.929928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:54.234 [2024-12-09 23:00:09.930147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:54.234 [2024-12-09 23:00:09.930169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:54.234 [2024-12-09 23:00:09.930364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.234 
23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.234 "name": "raid_bdev1", 00:18:54.234 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:54.234 "strip_size_kb": 0, 00:18:54.234 "state": "online", 00:18:54.234 "raid_level": "raid1", 00:18:54.234 "superblock": true, 00:18:54.234 "num_base_bdevs": 4, 00:18:54.234 "num_base_bdevs_discovered": 4, 00:18:54.234 "num_base_bdevs_operational": 4, 00:18:54.234 "base_bdevs_list": [ 00:18:54.234 { 00:18:54.234 "name": "pt1", 00:18:54.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.234 "is_configured": true, 00:18:54.234 "data_offset": 2048, 00:18:54.234 "data_size": 63488 00:18:54.234 }, 00:18:54.234 { 00:18:54.234 "name": "pt2", 00:18:54.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.234 "is_configured": true, 00:18:54.234 "data_offset": 2048, 00:18:54.234 "data_size": 63488 00:18:54.234 }, 00:18:54.234 { 00:18:54.234 "name": "pt3", 00:18:54.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.234 "is_configured": true, 00:18:54.234 "data_offset": 2048, 00:18:54.234 "data_size": 63488 
00:18:54.234 }, 00:18:54.234 { 00:18:54.234 "name": "pt4", 00:18:54.234 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.234 "is_configured": true, 00:18:54.234 "data_offset": 2048, 00:18:54.234 "data_size": 63488 00:18:54.234 } 00:18:54.234 ] 00:18:54.234 }' 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.234 23:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.801 [2024-12-09 23:00:10.422358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.801 "name": "raid_bdev1", 00:18:54.801 "aliases": [ 00:18:54.801 "a2160e48-6f6d-426e-8715-31d7aec2c74b" 00:18:54.801 ], 
00:18:54.801 "product_name": "Raid Volume", 00:18:54.801 "block_size": 512, 00:18:54.801 "num_blocks": 63488, 00:18:54.801 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:54.801 "assigned_rate_limits": { 00:18:54.801 "rw_ios_per_sec": 0, 00:18:54.801 "rw_mbytes_per_sec": 0, 00:18:54.801 "r_mbytes_per_sec": 0, 00:18:54.801 "w_mbytes_per_sec": 0 00:18:54.801 }, 00:18:54.801 "claimed": false, 00:18:54.801 "zoned": false, 00:18:54.801 "supported_io_types": { 00:18:54.801 "read": true, 00:18:54.801 "write": true, 00:18:54.801 "unmap": false, 00:18:54.801 "flush": false, 00:18:54.801 "reset": true, 00:18:54.801 "nvme_admin": false, 00:18:54.801 "nvme_io": false, 00:18:54.801 "nvme_io_md": false, 00:18:54.801 "write_zeroes": true, 00:18:54.801 "zcopy": false, 00:18:54.801 "get_zone_info": false, 00:18:54.801 "zone_management": false, 00:18:54.801 "zone_append": false, 00:18:54.801 "compare": false, 00:18:54.801 "compare_and_write": false, 00:18:54.801 "abort": false, 00:18:54.801 "seek_hole": false, 00:18:54.801 "seek_data": false, 00:18:54.801 "copy": false, 00:18:54.801 "nvme_iov_md": false 00:18:54.801 }, 00:18:54.801 "memory_domains": [ 00:18:54.801 { 00:18:54.801 "dma_device_id": "system", 00:18:54.801 "dma_device_type": 1 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.801 "dma_device_type": 2 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": "system", 00:18:54.801 "dma_device_type": 1 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.801 "dma_device_type": 2 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": "system", 00:18:54.801 "dma_device_type": 1 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.801 "dma_device_type": 2 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": "system", 00:18:54.801 "dma_device_type": 1 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:54.801 "dma_device_type": 2 00:18:54.801 } 00:18:54.801 ], 00:18:54.801 "driver_specific": { 00:18:54.801 "raid": { 00:18:54.801 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:54.801 "strip_size_kb": 0, 00:18:54.801 "state": "online", 00:18:54.801 "raid_level": "raid1", 00:18:54.801 "superblock": true, 00:18:54.801 "num_base_bdevs": 4, 00:18:54.801 "num_base_bdevs_discovered": 4, 00:18:54.801 "num_base_bdevs_operational": 4, 00:18:54.801 "base_bdevs_list": [ 00:18:54.801 { 00:18:54.801 "name": "pt1", 00:18:54.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.801 "is_configured": true, 00:18:54.801 "data_offset": 2048, 00:18:54.801 "data_size": 63488 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "name": "pt2", 00:18:54.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.801 "is_configured": true, 00:18:54.801 "data_offset": 2048, 00:18:54.801 "data_size": 63488 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "name": "pt3", 00:18:54.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.801 "is_configured": true, 00:18:54.801 "data_offset": 2048, 00:18:54.801 "data_size": 63488 00:18:54.801 }, 00:18:54.801 { 00:18:54.801 "name": "pt4", 00:18:54.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.801 "is_configured": true, 00:18:54.801 "data_offset": 2048, 00:18:54.801 "data_size": 63488 00:18:54.801 } 00:18:54.801 ] 00:18:54.801 } 00:18:54.801 } 00:18:54.801 }' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:54.801 pt2 00:18:54.801 pt3 00:18:54.801 pt4' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.801 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.058 23:00:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.058 [2024-12-09 23:00:10.761958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2160e48-6f6d-426e-8715-31d7aec2c74b 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a2160e48-6f6d-426e-8715-31d7aec2c74b ']' 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.058 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.058 [2024-12-09 23:00:10.809438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.058 [2024-12-09 23:00:10.809500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.059 [2024-12-09 23:00:10.809603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.059 [2024-12-09 23:00:10.809694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.059 [2024-12-09 23:00:10.809712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.059 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.317 23:00:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.317 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.317 [2024-12-09 23:00:10.969198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:55.317 [2024-12-09 23:00:10.971279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:55.318 [2024-12-09 23:00:10.971393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:55.318 [2024-12-09 23:00:10.971472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:55.318 [2024-12-09 23:00:10.971605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:55.318 [2024-12-09 23:00:10.971724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:55.318 [2024-12-09 23:00:10.971797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:55.318 [2024-12-09 23:00:10.971890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:55.318 [2024-12-09 23:00:10.971954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.318 [2024-12-09 23:00:10.972006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:18:55.318 request: 00:18:55.318 { 00:18:55.318 "name": "raid_bdev1", 00:18:55.318 "raid_level": "raid1", 00:18:55.318 "base_bdevs": [ 00:18:55.318 "malloc1", 00:18:55.318 "malloc2", 00:18:55.318 "malloc3", 00:18:55.318 "malloc4" 00:18:55.318 ], 00:18:55.318 "superblock": false, 00:18:55.318 "method": "bdev_raid_create", 00:18:55.318 "req_id": 1 00:18:55.318 } 00:18:55.318 Got JSON-RPC error response 00:18:55.318 response: 00:18:55.318 { 00:18:55.318 "code": -17, 00:18:55.318 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:55.318 } 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.318 23:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.318 
23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.318 [2024-12-09 23:00:11.041018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.318 [2024-12-09 23:00:11.041210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.318 [2024-12-09 23:00:11.041237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:55.318 [2024-12-09 23:00:11.041252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.318 [2024-12-09 23:00:11.043566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.318 [2024-12-09 23:00:11.043616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.318 [2024-12-09 23:00:11.043716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:55.318 [2024-12-09 23:00:11.043779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.318 pt1 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.318 23:00:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.318 "name": "raid_bdev1", 00:18:55.318 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:55.318 "strip_size_kb": 0, 00:18:55.318 "state": "configuring", 00:18:55.318 "raid_level": "raid1", 00:18:55.318 "superblock": true, 00:18:55.318 "num_base_bdevs": 4, 00:18:55.318 "num_base_bdevs_discovered": 1, 00:18:55.318 "num_base_bdevs_operational": 4, 00:18:55.318 "base_bdevs_list": [ 00:18:55.318 { 00:18:55.318 "name": "pt1", 00:18:55.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.318 "is_configured": true, 00:18:55.318 "data_offset": 2048, 00:18:55.318 "data_size": 63488 00:18:55.318 }, 00:18:55.318 { 00:18:55.318 "name": null, 00:18:55.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.318 "is_configured": false, 00:18:55.318 "data_offset": 2048, 00:18:55.318 "data_size": 63488 00:18:55.318 }, 00:18:55.318 { 00:18:55.318 "name": null, 00:18:55.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.318 
"is_configured": false, 00:18:55.318 "data_offset": 2048, 00:18:55.318 "data_size": 63488 00:18:55.318 }, 00:18:55.318 { 00:18:55.318 "name": null, 00:18:55.318 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.318 "is_configured": false, 00:18:55.318 "data_offset": 2048, 00:18:55.318 "data_size": 63488 00:18:55.318 } 00:18:55.318 ] 00:18:55.318 }' 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.318 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.885 [2024-12-09 23:00:11.532660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.885 [2024-12-09 23:00:11.532866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.885 [2024-12-09 23:00:11.532929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:55.885 [2024-12-09 23:00:11.532971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.885 [2024-12-09 23:00:11.533564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.885 [2024-12-09 23:00:11.533655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.885 [2024-12-09 23:00:11.533794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:55.885 [2024-12-09 23:00:11.533865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:18:55.885 pt2 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.885 [2024-12-09 23:00:11.544667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.885 23:00:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.885 "name": "raid_bdev1", 00:18:55.885 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:55.885 "strip_size_kb": 0, 00:18:55.885 "state": "configuring", 00:18:55.885 "raid_level": "raid1", 00:18:55.885 "superblock": true, 00:18:55.885 "num_base_bdevs": 4, 00:18:55.885 "num_base_bdevs_discovered": 1, 00:18:55.885 "num_base_bdevs_operational": 4, 00:18:55.885 "base_bdevs_list": [ 00:18:55.885 { 00:18:55.885 "name": "pt1", 00:18:55.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.885 "is_configured": true, 00:18:55.885 "data_offset": 2048, 00:18:55.885 "data_size": 63488 00:18:55.885 }, 00:18:55.885 { 00:18:55.885 "name": null, 00:18:55.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.885 "is_configured": false, 00:18:55.885 "data_offset": 0, 00:18:55.885 "data_size": 63488 00:18:55.885 }, 00:18:55.885 { 00:18:55.885 "name": null, 00:18:55.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:55.885 "is_configured": false, 00:18:55.885 "data_offset": 2048, 00:18:55.885 "data_size": 63488 00:18:55.885 }, 00:18:55.885 { 00:18:55.885 "name": null, 00:18:55.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:55.885 "is_configured": false, 00:18:55.885 "data_offset": 2048, 00:18:55.885 "data_size": 63488 00:18:55.885 } 00:18:55.885 ] 00:18:55.885 }' 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.885 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.143 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:18:56.143 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:56.143 23:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.143 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.143 23:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.143 [2024-12-09 23:00:11.996660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.143 [2024-12-09 23:00:11.996764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.143 [2024-12-09 23:00:11.996790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:56.143 [2024-12-09 23:00:11.996803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.143 [2024-12-09 23:00:11.997290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.144 [2024-12-09 23:00:11.997312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.144 [2024-12-09 23:00:11.997411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:56.144 [2024-12-09 23:00:11.997438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.400 pt2 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:56.400 23:00:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.400 [2024-12-09 23:00:12.008642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:56.400 [2024-12-09 23:00:12.008718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.400 [2024-12-09 23:00:12.008741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:56.400 [2024-12-09 23:00:12.008754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.400 [2024-12-09 23:00:12.009238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.400 [2024-12-09 23:00:12.009259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:56.400 [2024-12-09 23:00:12.009346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:56.400 [2024-12-09 23:00:12.009371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:56.400 pt3 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.400 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.400 [2024-12-09 23:00:12.020614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:56.400 [2024-12-09 
23:00:12.020774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.400 [2024-12-09 23:00:12.020803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:56.400 [2024-12-09 23:00:12.020816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.400 [2024-12-09 23:00:12.021326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.400 [2024-12-09 23:00:12.021358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:56.400 [2024-12-09 23:00:12.021446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:56.401 [2024-12-09 23:00:12.021513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:56.401 [2024-12-09 23:00:12.021686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:56.401 [2024-12-09 23:00:12.021698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:56.401 [2024-12-09 23:00:12.021986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:56.401 [2024-12-09 23:00:12.022186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:56.401 [2024-12-09 23:00:12.022203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:56.401 [2024-12-09 23:00:12.022367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.401 pt4 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.401 "name": "raid_bdev1", 00:18:56.401 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:56.401 "strip_size_kb": 0, 00:18:56.401 "state": "online", 00:18:56.401 "raid_level": "raid1", 00:18:56.401 "superblock": true, 00:18:56.401 "num_base_bdevs": 4, 00:18:56.401 
"num_base_bdevs_discovered": 4, 00:18:56.401 "num_base_bdevs_operational": 4, 00:18:56.401 "base_bdevs_list": [ 00:18:56.401 { 00:18:56.401 "name": "pt1", 00:18:56.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.401 "is_configured": true, 00:18:56.401 "data_offset": 2048, 00:18:56.401 "data_size": 63488 00:18:56.401 }, 00:18:56.401 { 00:18:56.401 "name": "pt2", 00:18:56.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.401 "is_configured": true, 00:18:56.401 "data_offset": 2048, 00:18:56.401 "data_size": 63488 00:18:56.401 }, 00:18:56.401 { 00:18:56.401 "name": "pt3", 00:18:56.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.401 "is_configured": true, 00:18:56.401 "data_offset": 2048, 00:18:56.401 "data_size": 63488 00:18:56.401 }, 00:18:56.401 { 00:18:56.401 "name": "pt4", 00:18:56.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.401 "is_configured": true, 00:18:56.401 "data_offset": 2048, 00:18:56.401 "data_size": 63488 00:18:56.401 } 00:18:56.401 ] 00:18:56.401 }' 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.401 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.659 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.917 [2024-12-09 23:00:12.516362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.917 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.917 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.917 "name": "raid_bdev1", 00:18:56.917 "aliases": [ 00:18:56.917 "a2160e48-6f6d-426e-8715-31d7aec2c74b" 00:18:56.917 ], 00:18:56.917 "product_name": "Raid Volume", 00:18:56.917 "block_size": 512, 00:18:56.917 "num_blocks": 63488, 00:18:56.917 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:56.917 "assigned_rate_limits": { 00:18:56.917 "rw_ios_per_sec": 0, 00:18:56.917 "rw_mbytes_per_sec": 0, 00:18:56.917 "r_mbytes_per_sec": 0, 00:18:56.917 "w_mbytes_per_sec": 0 00:18:56.917 }, 00:18:56.917 "claimed": false, 00:18:56.917 "zoned": false, 00:18:56.917 "supported_io_types": { 00:18:56.917 "read": true, 00:18:56.917 "write": true, 00:18:56.917 "unmap": false, 00:18:56.917 "flush": false, 00:18:56.917 "reset": true, 00:18:56.917 "nvme_admin": false, 00:18:56.917 "nvme_io": false, 00:18:56.917 "nvme_io_md": false, 00:18:56.917 "write_zeroes": true, 00:18:56.917 "zcopy": false, 00:18:56.917 "get_zone_info": false, 00:18:56.917 "zone_management": false, 00:18:56.917 "zone_append": false, 00:18:56.917 "compare": false, 00:18:56.917 "compare_and_write": false, 00:18:56.917 "abort": false, 00:18:56.917 "seek_hole": false, 00:18:56.917 "seek_data": false, 00:18:56.917 "copy": false, 00:18:56.917 "nvme_iov_md": false 00:18:56.917 }, 00:18:56.917 "memory_domains": [ 00:18:56.917 { 00:18:56.917 "dma_device_id": "system", 00:18:56.917 
"dma_device_type": 1 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.917 "dma_device_type": 2 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "system", 00:18:56.917 "dma_device_type": 1 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.917 "dma_device_type": 2 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "system", 00:18:56.917 "dma_device_type": 1 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.917 "dma_device_type": 2 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "system", 00:18:56.917 "dma_device_type": 1 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.917 "dma_device_type": 2 00:18:56.917 } 00:18:56.917 ], 00:18:56.917 "driver_specific": { 00:18:56.917 "raid": { 00:18:56.917 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:56.917 "strip_size_kb": 0, 00:18:56.917 "state": "online", 00:18:56.917 "raid_level": "raid1", 00:18:56.917 "superblock": true, 00:18:56.917 "num_base_bdevs": 4, 00:18:56.917 "num_base_bdevs_discovered": 4, 00:18:56.917 "num_base_bdevs_operational": 4, 00:18:56.917 "base_bdevs_list": [ 00:18:56.917 { 00:18:56.917 "name": "pt1", 00:18:56.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.917 "is_configured": true, 00:18:56.917 "data_offset": 2048, 00:18:56.917 "data_size": 63488 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "name": "pt2", 00:18:56.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.917 "is_configured": true, 00:18:56.917 "data_offset": 2048, 00:18:56.917 "data_size": 63488 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "name": "pt3", 00:18:56.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.917 "is_configured": true, 00:18:56.917 "data_offset": 2048, 00:18:56.917 "data_size": 63488 00:18:56.917 }, 00:18:56.917 { 00:18:56.917 "name": "pt4", 00:18:56.917 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:18:56.917 "is_configured": true, 00:18:56.917 "data_offset": 2048, 00:18:56.917 "data_size": 63488 00:18:56.917 } 00:18:56.918 ] 00:18:56.918 } 00:18:56.918 } 00:18:56.918 }' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:56.918 pt2 00:18:56.918 pt3 00:18:56.918 pt4' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.918 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.188 [2024-12-09 23:00:12.871716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a2160e48-6f6d-426e-8715-31d7aec2c74b '!=' a2160e48-6f6d-426e-8715-31d7aec2c74b ']' 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.188 [2024-12-09 23:00:12.911380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:57.188 23:00:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.188 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.189 "name": "raid_bdev1", 00:18:57.189 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:57.189 "strip_size_kb": 0, 00:18:57.189 "state": "online", 
00:18:57.189 "raid_level": "raid1", 00:18:57.189 "superblock": true, 00:18:57.189 "num_base_bdevs": 4, 00:18:57.189 "num_base_bdevs_discovered": 3, 00:18:57.189 "num_base_bdevs_operational": 3, 00:18:57.189 "base_bdevs_list": [ 00:18:57.189 { 00:18:57.189 "name": null, 00:18:57.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.189 "is_configured": false, 00:18:57.189 "data_offset": 0, 00:18:57.189 "data_size": 63488 00:18:57.189 }, 00:18:57.189 { 00:18:57.189 "name": "pt2", 00:18:57.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.189 "is_configured": true, 00:18:57.189 "data_offset": 2048, 00:18:57.189 "data_size": 63488 00:18:57.189 }, 00:18:57.189 { 00:18:57.189 "name": "pt3", 00:18:57.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.189 "is_configured": true, 00:18:57.189 "data_offset": 2048, 00:18:57.189 "data_size": 63488 00:18:57.189 }, 00:18:57.189 { 00:18:57.189 "name": "pt4", 00:18:57.189 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.189 "is_configured": true, 00:18:57.189 "data_offset": 2048, 00:18:57.189 "data_size": 63488 00:18:57.189 } 00:18:57.189 ] 00:18:57.189 }' 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.189 23:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.753 [2024-12-09 23:00:13.362598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.753 [2024-12-09 23:00:13.362790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.753 [2024-12-09 23:00:13.362933] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:57.753 [2024-12-09 23:00:13.363071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.753 [2024-12-09 23:00:13.363138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.753 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.754 
23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.754 [2024-12-09 23:00:13.462419] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:57.754 [2024-12-09 23:00:13.462551] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.754 [2024-12-09 23:00:13.462580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:57.754 [2024-12-09 23:00:13.462595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.754 [2024-12-09 23:00:13.465229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.754 [2024-12-09 23:00:13.465393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:57.754 [2024-12-09 23:00:13.465536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:57.754 [2024-12-09 23:00:13.465603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.754 pt2 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.754 "name": "raid_bdev1", 00:18:57.754 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:57.754 "strip_size_kb": 0, 00:18:57.754 "state": "configuring", 00:18:57.754 "raid_level": "raid1", 00:18:57.754 "superblock": true, 00:18:57.754 "num_base_bdevs": 4, 00:18:57.754 "num_base_bdevs_discovered": 1, 00:18:57.754 "num_base_bdevs_operational": 3, 00:18:57.754 "base_bdevs_list": [ 00:18:57.754 { 00:18:57.754 "name": null, 00:18:57.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.754 "is_configured": false, 00:18:57.754 "data_offset": 2048, 00:18:57.754 "data_size": 63488 00:18:57.754 }, 00:18:57.754 { 00:18:57.754 "name": "pt2", 00:18:57.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.754 "is_configured": true, 00:18:57.754 "data_offset": 2048, 00:18:57.754 "data_size": 63488 00:18:57.754 }, 00:18:57.754 { 00:18:57.754 "name": null, 00:18:57.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:57.754 "is_configured": false, 00:18:57.754 "data_offset": 2048, 00:18:57.754 "data_size": 63488 00:18:57.754 }, 00:18:57.754 { 00:18:57.754 "name": null, 00:18:57.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:57.754 "is_configured": false, 00:18:57.754 "data_offset": 2048, 00:18:57.754 "data_size": 63488 00:18:57.754 } 00:18:57.754 ] 00:18:57.754 }' 
00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.754 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.011 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:58.011 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:58.011 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:58.011 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.011 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.011 [2024-12-09 23:00:13.865783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:58.011 [2024-12-09 23:00:13.865984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.011 [2024-12-09 23:00:13.866043] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:58.011 [2024-12-09 23:00:13.866082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.011 [2024-12-09 23:00:13.866625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.011 [2024-12-09 23:00:13.866701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:58.012 [2024-12-09 23:00:13.866837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:58.012 [2024-12-09 23:00:13.866902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:58.269 pt3 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.269 "name": "raid_bdev1", 00:18:58.269 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:58.269 "strip_size_kb": 0, 00:18:58.269 "state": "configuring", 00:18:58.269 "raid_level": "raid1", 00:18:58.269 "superblock": true, 00:18:58.269 "num_base_bdevs": 4, 00:18:58.269 "num_base_bdevs_discovered": 2, 00:18:58.269 "num_base_bdevs_operational": 3, 00:18:58.269 
"base_bdevs_list": [ 00:18:58.269 { 00:18:58.269 "name": null, 00:18:58.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.269 "is_configured": false, 00:18:58.269 "data_offset": 2048, 00:18:58.269 "data_size": 63488 00:18:58.269 }, 00:18:58.269 { 00:18:58.269 "name": "pt2", 00:18:58.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.269 "is_configured": true, 00:18:58.269 "data_offset": 2048, 00:18:58.269 "data_size": 63488 00:18:58.269 }, 00:18:58.269 { 00:18:58.269 "name": "pt3", 00:18:58.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.269 "is_configured": true, 00:18:58.269 "data_offset": 2048, 00:18:58.269 "data_size": 63488 00:18:58.269 }, 00:18:58.269 { 00:18:58.269 "name": null, 00:18:58.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.269 "is_configured": false, 00:18:58.269 "data_offset": 2048, 00:18:58.269 "data_size": 63488 00:18:58.269 } 00:18:58.269 ] 00:18:58.269 }' 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.269 23:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.526 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:58.526 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:58.526 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:58.526 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:58.526 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.526 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.526 [2024-12-09 23:00:14.301075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:58.526 [2024-12-09 23:00:14.301281] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.526 [2024-12-09 23:00:14.301316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:58.526 [2024-12-09 23:00:14.301328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.526 [2024-12-09 23:00:14.301841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.526 [2024-12-09 23:00:14.301865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:58.526 [2024-12-09 23:00:14.301966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:58.526 [2024-12-09 23:00:14.301993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:58.526 [2024-12-09 23:00:14.302143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:58.526 [2024-12-09 23:00:14.302153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:58.526 [2024-12-09 23:00:14.302412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:58.527 [2024-12-09 23:00:14.302605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:58.527 [2024-12-09 23:00:14.302623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:58.527 [2024-12-09 23:00:14.302779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.527 pt4 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.527 "name": "raid_bdev1", 00:18:58.527 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:58.527 "strip_size_kb": 0, 00:18:58.527 "state": "online", 00:18:58.527 "raid_level": "raid1", 00:18:58.527 "superblock": true, 00:18:58.527 "num_base_bdevs": 4, 00:18:58.527 "num_base_bdevs_discovered": 3, 00:18:58.527 "num_base_bdevs_operational": 3, 00:18:58.527 "base_bdevs_list": [ 00:18:58.527 { 00:18:58.527 "name": null, 00:18:58.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.527 "is_configured": false, 00:18:58.527 
"data_offset": 2048, 00:18:58.527 "data_size": 63488 00:18:58.527 }, 00:18:58.527 { 00:18:58.527 "name": "pt2", 00:18:58.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.527 "is_configured": true, 00:18:58.527 "data_offset": 2048, 00:18:58.527 "data_size": 63488 00:18:58.527 }, 00:18:58.527 { 00:18:58.527 "name": "pt3", 00:18:58.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:58.527 "is_configured": true, 00:18:58.527 "data_offset": 2048, 00:18:58.527 "data_size": 63488 00:18:58.527 }, 00:18:58.527 { 00:18:58.527 "name": "pt4", 00:18:58.527 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:58.527 "is_configured": true, 00:18:58.527 "data_offset": 2048, 00:18:58.527 "data_size": 63488 00:18:58.527 } 00:18:58.527 ] 00:18:58.527 }' 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.527 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.092 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:59.092 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.092 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.092 [2024-12-09 23:00:14.768614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.092 [2024-12-09 23:00:14.768746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.092 [2024-12-09 23:00:14.768885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.092 [2024-12-09 23:00:14.769018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.092 [2024-12-09 23:00:14.769084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:59.092 23:00:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.092 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.093 [2024-12-09 23:00:14.844640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:59.093 [2024-12-09 23:00:14.844839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:59.093 [2024-12-09 23:00:14.844909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:59.093 [2024-12-09 23:00:14.844968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.093 [2024-12-09 23:00:14.847550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.093 [2024-12-09 23:00:14.847660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:59.093 [2024-12-09 23:00:14.847813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:59.093 [2024-12-09 23:00:14.847924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.093 [2024-12-09 23:00:14.848135] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:59.093 [2024-12-09 23:00:14.848206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.093 [2024-12-09 23:00:14.848290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:59.093 [2024-12-09 23:00:14.848439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.093 [2024-12-09 23:00:14.848629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:59.093 pt1 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.093 "name": "raid_bdev1", 00:18:59.093 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:59.093 "strip_size_kb": 0, 00:18:59.093 "state": "configuring", 00:18:59.093 "raid_level": "raid1", 00:18:59.093 "superblock": true, 00:18:59.093 "num_base_bdevs": 4, 00:18:59.093 "num_base_bdevs_discovered": 2, 00:18:59.093 "num_base_bdevs_operational": 3, 00:18:59.093 "base_bdevs_list": [ 00:18:59.093 { 00:18:59.093 "name": null, 00:18:59.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.093 "is_configured": false, 00:18:59.093 "data_offset": 2048, 00:18:59.093 
"data_size": 63488 00:18:59.093 }, 00:18:59.093 { 00:18:59.093 "name": "pt2", 00:18:59.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.093 "is_configured": true, 00:18:59.093 "data_offset": 2048, 00:18:59.093 "data_size": 63488 00:18:59.093 }, 00:18:59.093 { 00:18:59.093 "name": "pt3", 00:18:59.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.093 "is_configured": true, 00:18:59.093 "data_offset": 2048, 00:18:59.093 "data_size": 63488 00:18:59.093 }, 00:18:59.093 { 00:18:59.093 "name": null, 00:18:59.093 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.093 "is_configured": false, 00:18:59.093 "data_offset": 2048, 00:18:59.093 "data_size": 63488 00:18:59.093 } 00:18:59.093 ] 00:18:59.093 }' 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.093 23:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.662 [2024-12-09 
23:00:15.400538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:59.662 [2024-12-09 23:00:15.400740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.662 [2024-12-09 23:00:15.400796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:59.662 [2024-12-09 23:00:15.400840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.662 [2024-12-09 23:00:15.401410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.662 [2024-12-09 23:00:15.401521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:59.662 [2024-12-09 23:00:15.401704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:59.662 [2024-12-09 23:00:15.401776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:59.662 [2024-12-09 23:00:15.401982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:59.662 [2024-12-09 23:00:15.402034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:59.662 [2024-12-09 23:00:15.402361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:59.662 [2024-12-09 23:00:15.402615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:59.662 [2024-12-09 23:00:15.402674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:59.662 [2024-12-09 23:00:15.402898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.662 pt4 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:59.662 23:00:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.662 "name": "raid_bdev1", 00:18:59.662 "uuid": "a2160e48-6f6d-426e-8715-31d7aec2c74b", 00:18:59.662 "strip_size_kb": 0, 00:18:59.662 "state": "online", 00:18:59.662 "raid_level": "raid1", 00:18:59.662 "superblock": true, 00:18:59.662 "num_base_bdevs": 4, 00:18:59.662 "num_base_bdevs_discovered": 3, 00:18:59.662 "num_base_bdevs_operational": 3, 00:18:59.662 "base_bdevs_list": [ 00:18:59.662 { 
00:18:59.662 "name": null, 00:18:59.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.662 "is_configured": false, 00:18:59.662 "data_offset": 2048, 00:18:59.662 "data_size": 63488 00:18:59.662 }, 00:18:59.662 { 00:18:59.662 "name": "pt2", 00:18:59.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.662 "is_configured": true, 00:18:59.662 "data_offset": 2048, 00:18:59.662 "data_size": 63488 00:18:59.662 }, 00:18:59.662 { 00:18:59.662 "name": "pt3", 00:18:59.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:59.662 "is_configured": true, 00:18:59.662 "data_offset": 2048, 00:18:59.662 "data_size": 63488 00:18:59.662 }, 00:18:59.662 { 00:18:59.662 "name": "pt4", 00:18:59.662 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:59.662 "is_configured": true, 00:18:59.662 "data_offset": 2048, 00:18:59.662 "data_size": 63488 00:18:59.662 } 00:18:59.662 ] 00:18:59.662 }' 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.662 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.229 
23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:00.229 [2024-12-09 23:00:15.848072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a2160e48-6f6d-426e-8715-31d7aec2c74b '!=' a2160e48-6f6d-426e-8715-31d7aec2c74b ']' 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75142 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75142 ']' 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75142 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75142 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.229 killing process with pid 75142 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75142' 00:19:00.229 23:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75142 00:19:00.229 [2024-12-09 23:00:15.936306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.229 [2024-12-09 23:00:15.936453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.229 23:00:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75142 00:19:00.229 [2024-12-09 23:00:15.936565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.229 [2024-12-09 23:00:15.936585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:00.798 [2024-12-09 23:00:16.402809] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.172 23:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:02.172 00:19:02.172 real 0m8.998s 00:19:02.172 user 0m13.942s 00:19:02.172 sys 0m1.721s 00:19:02.172 23:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.172 23:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.172 ************************************ 00:19:02.172 END TEST raid_superblock_test 00:19:02.172 ************************************ 00:19:02.172 23:00:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:19:02.172 23:00:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:02.172 23:00:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.172 23:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:02.172 ************************************ 00:19:02.172 START TEST raid_read_error_test 00:19:02.172 ************************************ 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:02.172 
23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:02.172 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:02.173 23:00:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zxh7GAK2O1 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75633 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75633 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75633 ']' 00:19:02.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.173 23:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.173 [2024-12-09 23:00:17.821291] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:19:02.173 [2024-12-09 23:00:17.821572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75633 ] 00:19:02.173 [2024-12-09 23:00:18.008775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.431 [2024-12-09 23:00:18.141907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.689 [2024-12-09 23:00:18.362049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.689 [2024-12-09 23:00:18.362241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.948 BaseBdev1_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.948 true 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.948 [2024-12-09 23:00:18.710243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:02.948 [2024-12-09 23:00:18.710430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.948 [2024-12-09 23:00:18.710512] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:02.948 [2024-12-09 23:00:18.710557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.948 [2024-12-09 23:00:18.712743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.948 [2024-12-09 23:00:18.712847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:02.948 BaseBdev1 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.948 BaseBdev2_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.948 true 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.948 [2024-12-09 23:00:18.778125] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:02.948 [2024-12-09 23:00:18.778301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.948 [2024-12-09 23:00:18.778357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:02.948 [2024-12-09 23:00:18.778409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.948 [2024-12-09 23:00:18.780562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.948 [2024-12-09 23:00:18.780656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:02.948 BaseBdev2 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.948 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 BaseBdev3_malloc 00:19:03.206 23:00:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 true 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 [2024-12-09 23:00:18.857126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:03.206 [2024-12-09 23:00:18.857289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.206 [2024-12-09 23:00:18.857319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:03.206 [2024-12-09 23:00:18.857334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.206 [2024-12-09 23:00:18.859600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.206 [2024-12-09 23:00:18.859646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:03.206 BaseBdev3 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 BaseBdev4_malloc 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 true 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 [2024-12-09 23:00:18.924302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:03.206 [2024-12-09 23:00:18.924377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.206 [2024-12-09 23:00:18.924398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:03.206 [2024-12-09 23:00:18.924411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.206 [2024-12-09 23:00:18.926497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.206 [2024-12-09 23:00:18.926606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:03.206 BaseBdev4 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.206 [2024-12-09 23:00:18.936373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.206 [2024-12-09 23:00:18.938469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.206 [2024-12-09 23:00:18.938574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.206 [2024-12-09 23:00:18.938662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:03.206 [2024-12-09 23:00:18.938931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:03.206 [2024-12-09 23:00:18.938947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:03.206 [2024-12-09 23:00:18.939229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:03.206 [2024-12-09 23:00:18.939415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:03.206 [2024-12-09 23:00:18.939426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:03.206 [2024-12-09 23:00:18.939656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:03.206 23:00:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.206 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.207 "name": "raid_bdev1", 00:19:03.207 "uuid": "e69df1f4-7112-49b0-b10e-90b783f26736", 00:19:03.207 "strip_size_kb": 0, 00:19:03.207 "state": "online", 00:19:03.207 "raid_level": "raid1", 00:19:03.207 "superblock": true, 00:19:03.207 "num_base_bdevs": 4, 00:19:03.207 "num_base_bdevs_discovered": 4, 00:19:03.207 "num_base_bdevs_operational": 4, 00:19:03.207 "base_bdevs_list": [ 00:19:03.207 { 
00:19:03.207 "name": "BaseBdev1", 00:19:03.207 "uuid": "f3253645-7001-538d-ad8c-2202254b8e30", 00:19:03.207 "is_configured": true, 00:19:03.207 "data_offset": 2048, 00:19:03.207 "data_size": 63488 00:19:03.207 }, 00:19:03.207 { 00:19:03.207 "name": "BaseBdev2", 00:19:03.207 "uuid": "9d512d7a-2f48-5a19-8acb-ed61a5520eb2", 00:19:03.207 "is_configured": true, 00:19:03.207 "data_offset": 2048, 00:19:03.207 "data_size": 63488 00:19:03.207 }, 00:19:03.207 { 00:19:03.207 "name": "BaseBdev3", 00:19:03.207 "uuid": "b3bf5bc9-ac53-5341-a905-55fcec070802", 00:19:03.207 "is_configured": true, 00:19:03.207 "data_offset": 2048, 00:19:03.207 "data_size": 63488 00:19:03.207 }, 00:19:03.207 { 00:19:03.207 "name": "BaseBdev4", 00:19:03.207 "uuid": "78b11e97-b881-569a-bbf6-d7e6c6d985fb", 00:19:03.207 "is_configured": true, 00:19:03.207 "data_offset": 2048, 00:19:03.207 "data_size": 63488 00:19:03.207 } 00:19:03.207 ] 00:19:03.207 }' 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.207 23:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.779 23:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:03.779 23:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:03.779 [2024-12-09 23:00:19.520543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.716 23:00:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.716 23:00:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.716 "name": "raid_bdev1", 00:19:04.716 "uuid": "e69df1f4-7112-49b0-b10e-90b783f26736", 00:19:04.716 "strip_size_kb": 0, 00:19:04.716 "state": "online", 00:19:04.716 "raid_level": "raid1", 00:19:04.716 "superblock": true, 00:19:04.716 "num_base_bdevs": 4, 00:19:04.716 "num_base_bdevs_discovered": 4, 00:19:04.716 "num_base_bdevs_operational": 4, 00:19:04.716 "base_bdevs_list": [ 00:19:04.716 { 00:19:04.716 "name": "BaseBdev1", 00:19:04.716 "uuid": "f3253645-7001-538d-ad8c-2202254b8e30", 00:19:04.716 "is_configured": true, 00:19:04.716 "data_offset": 2048, 00:19:04.716 "data_size": 63488 00:19:04.716 }, 00:19:04.716 { 00:19:04.716 "name": "BaseBdev2", 00:19:04.716 "uuid": "9d512d7a-2f48-5a19-8acb-ed61a5520eb2", 00:19:04.716 "is_configured": true, 00:19:04.716 "data_offset": 2048, 00:19:04.716 "data_size": 63488 00:19:04.716 }, 00:19:04.716 { 00:19:04.716 "name": "BaseBdev3", 00:19:04.716 "uuid": "b3bf5bc9-ac53-5341-a905-55fcec070802", 00:19:04.716 "is_configured": true, 00:19:04.716 "data_offset": 2048, 00:19:04.716 "data_size": 63488 00:19:04.716 }, 00:19:04.716 { 00:19:04.716 "name": "BaseBdev4", 00:19:04.716 "uuid": "78b11e97-b881-569a-bbf6-d7e6c6d985fb", 00:19:04.716 "is_configured": true, 00:19:04.716 "data_offset": 2048, 00:19:04.716 "data_size": 63488 00:19:04.716 } 00:19:04.716 ] 00:19:04.716 }' 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.716 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.977 [2024-12-09 23:00:20.818785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.977 [2024-12-09 23:00:20.818849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.977 [2024-12-09 23:00:20.822250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.977 [2024-12-09 23:00:20.822370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.977 [2024-12-09 23:00:20.822562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.977 [2024-12-09 23:00:20.822634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:04.977 { 00:19:04.977 "results": [ 00:19:04.977 { 00:19:04.977 "job": "raid_bdev1", 00:19:04.977 "core_mask": "0x1", 00:19:04.977 "workload": "randrw", 00:19:04.977 "percentage": 50, 00:19:04.977 "status": "finished", 00:19:04.977 "queue_depth": 1, 00:19:04.977 "io_size": 131072, 00:19:04.977 "runtime": 1.299141, 00:19:04.977 "iops": 9626.36080302292, 00:19:04.977 "mibps": 1203.295100377865, 00:19:04.977 "io_failed": 0, 00:19:04.977 "io_timeout": 0, 00:19:04.977 "avg_latency_us": 100.62169159676718, 00:19:04.977 "min_latency_us": 25.9353711790393, 00:19:04.977 "max_latency_us": 1745.7187772925763 00:19:04.977 } 00:19:04.977 ], 00:19:04.977 "core_count": 1 00:19:04.977 } 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75633 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75633 ']' 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75633 00:19:04.977 23:00:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75633 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75633' 00:19:05.237 killing process with pid 75633 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75633 00:19:05.237 [2024-12-09 23:00:20.876066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.237 23:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75633 00:19:05.495 [2024-12-09 23:00:21.236485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zxh7GAK2O1 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.872 ************************************ 00:19:06.872 END TEST raid_read_error_test 00:19:06.872 ************************************ 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:06.872 00:19:06.872 real 0m4.831s 00:19:06.872 user 0m5.622s 00:19:06.872 sys 0m0.634s 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.872 23:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 23:00:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:19:06.872 23:00:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:06.872 23:00:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.872 23:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 ************************************ 00:19:06.872 START TEST raid_write_error_test 00:19:06.872 ************************************ 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jC2qYSsn8j 00:19:06.872 23:00:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75780 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75780 00:19:06.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75780 ']' 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.872 23:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.872 [2024-12-09 23:00:22.685102] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:19:06.872 [2024-12-09 23:00:22.685321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75780 ] 00:19:07.130 [2024-12-09 23:00:22.858083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.130 [2024-12-09 23:00:22.979753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.388 [2024-12-09 23:00:23.194137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.388 [2024-12-09 23:00:23.194274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 BaseBdev1_malloc 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 true 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 [2024-12-09 23:00:23.628802] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:07.954 [2024-12-09 23:00:23.628950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.954 [2024-12-09 23:00:23.628998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:07.954 [2024-12-09 23:00:23.629037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.954 [2024-12-09 23:00:23.631198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.954 [2024-12-09 23:00:23.631280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.954 BaseBdev1 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 BaseBdev2_malloc 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:07.954 23:00:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 true 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.954 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.954 [2024-12-09 23:00:23.694901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:07.954 [2024-12-09 23:00:23.694976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.954 [2024-12-09 23:00:23.694999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:07.954 [2024-12-09 23:00:23.695012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.954 [2024-12-09 23:00:23.697405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.955 [2024-12-09 23:00:23.697567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:07.955 BaseBdev2 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:07.955 BaseBdev3_malloc 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.955 true 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.955 [2024-12-09 23:00:23.777687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:07.955 [2024-12-09 23:00:23.777754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.955 [2024-12-09 23:00:23.777778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:07.955 [2024-12-09 23:00:23.777790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.955 [2024-12-09 23:00:23.779936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.955 [2024-12-09 23:00:23.779974] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:07.955 BaseBdev3 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.955 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.212 BaseBdev4_malloc 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.212 true 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.212 [2024-12-09 23:00:23.847672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:08.212 [2024-12-09 23:00:23.847729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.212 [2024-12-09 23:00:23.847748] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:08.212 [2024-12-09 23:00:23.847759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.212 [2024-12-09 23:00:23.850002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.212 [2024-12-09 23:00:23.850102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:08.212 BaseBdev4 
00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.212 [2024-12-09 23:00:23.859720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.212 [2024-12-09 23:00:23.861759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.212 [2024-12-09 23:00:23.861904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.212 [2024-12-09 23:00:23.862006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:08.212 [2024-12-09 23:00:23.862264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:19:08.212 [2024-12-09 23:00:23.862313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.212 [2024-12-09 23:00:23.862587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:19:08.212 [2024-12-09 23:00:23.862801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:19:08.212 [2024-12-09 23:00:23.862838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:19:08.212 [2024-12-09 23:00:23.863032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.212 "name": "raid_bdev1", 00:19:08.212 "uuid": "473131f8-eb53-4ddd-b1cf-2ed4ead8d45f", 00:19:08.212 "strip_size_kb": 0, 00:19:08.212 "state": "online", 00:19:08.212 "raid_level": "raid1", 00:19:08.212 "superblock": true, 00:19:08.212 "num_base_bdevs": 4, 00:19:08.212 "num_base_bdevs_discovered": 4, 00:19:08.212 
"num_base_bdevs_operational": 4, 00:19:08.212 "base_bdevs_list": [ 00:19:08.212 { 00:19:08.212 "name": "BaseBdev1", 00:19:08.212 "uuid": "a56d554c-6454-52a8-873c-bda34680e782", 00:19:08.212 "is_configured": true, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 }, 00:19:08.212 { 00:19:08.212 "name": "BaseBdev2", 00:19:08.212 "uuid": "c246e2b4-7957-50f2-9d49-d9b210da14a9", 00:19:08.212 "is_configured": true, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 }, 00:19:08.212 { 00:19:08.212 "name": "BaseBdev3", 00:19:08.212 "uuid": "123fedfe-73a5-51df-8f05-4984ee65abee", 00:19:08.212 "is_configured": true, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 }, 00:19:08.212 { 00:19:08.212 "name": "BaseBdev4", 00:19:08.212 "uuid": "94a4530d-db5a-5ef0-803e-fdd677a197cc", 00:19:08.212 "is_configured": true, 00:19:08.212 "data_offset": 2048, 00:19:08.212 "data_size": 63488 00:19:08.212 } 00:19:08.212 ] 00:19:08.212 }' 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.212 23:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.470 23:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:08.470 23:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:08.729 [2024-12-09 23:00:24.408006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.667 [2024-12-09 23:00:25.312743] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:09.667 [2024-12-09 23:00:25.312896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.667 [2024-12-09 23:00:25.313161] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.667 "name": "raid_bdev1", 00:19:09.667 "uuid": "473131f8-eb53-4ddd-b1cf-2ed4ead8d45f", 00:19:09.667 "strip_size_kb": 0, 00:19:09.667 "state": "online", 00:19:09.667 "raid_level": "raid1", 00:19:09.667 "superblock": true, 00:19:09.667 "num_base_bdevs": 4, 00:19:09.667 "num_base_bdevs_discovered": 3, 00:19:09.667 "num_base_bdevs_operational": 3, 00:19:09.667 "base_bdevs_list": [ 00:19:09.667 { 00:19:09.667 "name": null, 00:19:09.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.667 "is_configured": false, 00:19:09.667 "data_offset": 0, 00:19:09.667 "data_size": 63488 00:19:09.667 }, 00:19:09.667 { 00:19:09.667 "name": "BaseBdev2", 00:19:09.667 "uuid": "c246e2b4-7957-50f2-9d49-d9b210da14a9", 00:19:09.667 "is_configured": true, 00:19:09.667 "data_offset": 2048, 00:19:09.667 "data_size": 63488 00:19:09.667 }, 00:19:09.667 { 00:19:09.667 "name": "BaseBdev3", 00:19:09.667 "uuid": "123fedfe-73a5-51df-8f05-4984ee65abee", 00:19:09.667 "is_configured": true, 00:19:09.667 "data_offset": 2048, 00:19:09.667 "data_size": 63488 00:19:09.667 }, 00:19:09.667 { 00:19:09.667 "name": "BaseBdev4", 00:19:09.667 "uuid": "94a4530d-db5a-5ef0-803e-fdd677a197cc", 00:19:09.667 "is_configured": true, 00:19:09.667 "data_offset": 2048, 00:19:09.667 "data_size": 63488 00:19:09.667 } 00:19:09.667 ] 
00:19:09.667 }' 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.667 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.239 [2024-12-09 23:00:25.833874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.239 [2024-12-09 23:00:25.833909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.239 [2024-12-09 23:00:25.836676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.239 [2024-12-09 23:00:25.836759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.239 [2024-12-09 23:00:25.836915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.239 [2024-12-09 23:00:25.836974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:19:10.239 { 00:19:10.239 "results": [ 00:19:10.239 { 00:19:10.239 "job": "raid_bdev1", 00:19:10.239 "core_mask": "0x1", 00:19:10.239 "workload": "randrw", 00:19:10.239 "percentage": 50, 00:19:10.239 "status": "finished", 00:19:10.239 "queue_depth": 1, 00:19:10.239 "io_size": 131072, 00:19:10.239 "runtime": 1.426672, 00:19:10.239 "iops": 10668.184418002176, 00:19:10.239 "mibps": 1333.523052250272, 00:19:10.239 "io_failed": 0, 00:19:10.239 "io_timeout": 0, 00:19:10.239 "avg_latency_us": 90.72736791971033, 00:19:10.239 "min_latency_us": 24.593886462882097, 00:19:10.239 "max_latency_us": 1931.7379912663755 00:19:10.239 } 00:19:10.239 ], 00:19:10.239 "core_count": 1 
00:19:10.239 } 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75780 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75780 ']' 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75780 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75780 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.239 killing process with pid 75780 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75780' 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75780 00:19:10.239 [2024-12-09 23:00:25.880945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.239 23:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75780 00:19:10.503 [2024-12-09 23:00:26.239250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jC2qYSsn8j 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:11.893 00:19:11.893 real 0m4.926s 00:19:11.893 user 0m5.854s 00:19:11.893 sys 0m0.606s 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.893 23:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.893 ************************************ 00:19:11.893 END TEST raid_write_error_test 00:19:11.893 ************************************ 00:19:11.893 23:00:27 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:19:11.893 23:00:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:19:11.893 23:00:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:19:11.893 23:00:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:11.893 23:00:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.893 23:00:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.893 ************************************ 00:19:11.893 START TEST raid_rebuild_test 00:19:11.893 ************************************ 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:11.893 
23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75924 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75924 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75924 ']' 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.893 23:00:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.893 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:11.893 Zero copy mechanism will not be used. 00:19:11.893 [2024-12-09 23:00:27.680571] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:19:11.893 [2024-12-09 23:00:27.680701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75924 ] 00:19:12.155 [2024-12-09 23:00:27.837597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.155 [2024-12-09 23:00:27.960661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.415 [2024-12-09 23:00:28.166245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.415 [2024-12-09 23:00:28.166303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 BaseBdev1_malloc 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 [2024-12-09 23:00:28.660867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:12.999 
[2024-12-09 23:00:28.660938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.999 [2024-12-09 23:00:28.660962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:12.999 [2024-12-09 23:00:28.660974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.999 [2024-12-09 23:00:28.663094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.999 [2024-12-09 23:00:28.663137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:12.999 BaseBdev1 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 BaseBdev2_malloc 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 [2024-12-09 23:00:28.715682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:12.999 [2024-12-09 23:00:28.715751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.999 [2024-12-09 23:00:28.715770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:19:12.999 [2024-12-09 23:00:28.715783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.999 [2024-12-09 23:00:28.717886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.999 [2024-12-09 23:00:28.717927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:12.999 BaseBdev2 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 spare_malloc 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 spare_delay 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 [2024-12-09 23:00:28.788589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.999 [2024-12-09 23:00:28.788663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:12.999 [2024-12-09 23:00:28.788686] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:12.999 [2024-12-09 23:00:28.788700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.999 [2024-12-09 23:00:28.790834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.999 [2024-12-09 23:00:28.790873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.999 spare 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.999 [2024-12-09 23:00:28.796619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.999 [2024-12-09 23:00:28.798525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.999 [2024-12-09 23:00:28.798626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:12.999 [2024-12-09 23:00:28.798642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:12.999 [2024-12-09 23:00:28.798927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:12.999 [2024-12-09 23:00:28.799092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:12.999 [2024-12-09 23:00:28.799109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:12.999 [2024-12-09 23:00:28.799267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.999 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.000 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.000 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.000 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.000 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.000 "name": "raid_bdev1", 00:19:13.000 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:13.000 "strip_size_kb": 0, 00:19:13.000 "state": "online", 00:19:13.000 
"raid_level": "raid1", 00:19:13.000 "superblock": false, 00:19:13.000 "num_base_bdevs": 2, 00:19:13.000 "num_base_bdevs_discovered": 2, 00:19:13.000 "num_base_bdevs_operational": 2, 00:19:13.000 "base_bdevs_list": [ 00:19:13.000 { 00:19:13.000 "name": "BaseBdev1", 00:19:13.000 "uuid": "52818ab4-e243-529e-a9ed-b6e07294b545", 00:19:13.000 "is_configured": true, 00:19:13.000 "data_offset": 0, 00:19:13.000 "data_size": 65536 00:19:13.000 }, 00:19:13.000 { 00:19:13.000 "name": "BaseBdev2", 00:19:13.000 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:13.000 "is_configured": true, 00:19:13.000 "data_offset": 0, 00:19:13.000 "data_size": 65536 00:19:13.000 } 00:19:13.000 ] 00:19:13.000 }' 00:19:13.000 23:00:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.000 23:00:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.568 [2024-12-09 23:00:29.172568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.568 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:13.826 [2024-12-09 23:00:29.463808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:13.826 /dev/nbd0 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.826 1+0 records in 00:19:13.826 1+0 records out 00:19:13.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025782 s, 15.9 MB/s 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:13.826 23:00:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:19.107 65536+0 records in 00:19:19.107 65536+0 records out 00:19:19.107 33554432 bytes (34 MB, 32 MiB) copied, 4.72676 s, 7.1 MB/s 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:19.107 [2024-12-09 23:00:34.467327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.107 [2024-12-09 23:00:34.499547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.107 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.108 "name": "raid_bdev1", 00:19:19.108 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:19.108 "strip_size_kb": 0, 00:19:19.108 "state": "online", 00:19:19.108 "raid_level": "raid1", 00:19:19.108 "superblock": false, 00:19:19.108 "num_base_bdevs": 2, 00:19:19.108 "num_base_bdevs_discovered": 1, 00:19:19.108 "num_base_bdevs_operational": 1, 00:19:19.108 "base_bdevs_list": [ 00:19:19.108 { 00:19:19.108 "name": null, 00:19:19.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.108 "is_configured": false, 00:19:19.108 "data_offset": 0, 00:19:19.108 "data_size": 65536 00:19:19.108 }, 00:19:19.108 { 00:19:19.108 "name": "BaseBdev2", 00:19:19.108 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:19.108 "is_configured": true, 00:19:19.108 "data_offset": 0, 00:19:19.108 "data_size": 65536 00:19:19.108 } 00:19:19.108 ] 00:19:19.108 }' 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.108 23:00:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.369 [2024-12-09 23:00:34.966749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.369 [2024-12-09 23:00:34.983768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:19:19.369 23:00:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.369 23:00:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:19.369 [2024-12-09 23:00:34.985853] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.305 23:00:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.305 "name": "raid_bdev1", 00:19:20.305 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:20.305 "strip_size_kb": 0, 00:19:20.305 "state": "online", 00:19:20.305 "raid_level": "raid1", 00:19:20.305 "superblock": false, 00:19:20.305 "num_base_bdevs": 2, 00:19:20.305 "num_base_bdevs_discovered": 2, 00:19:20.305 "num_base_bdevs_operational": 2, 00:19:20.305 "process": { 00:19:20.305 "type": "rebuild", 00:19:20.305 "target": "spare", 00:19:20.305 "progress": { 00:19:20.305 "blocks": 20480, 
00:19:20.305 "percent": 31 00:19:20.305 } 00:19:20.305 }, 00:19:20.305 "base_bdevs_list": [ 00:19:20.305 { 00:19:20.305 "name": "spare", 00:19:20.305 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:20.305 "is_configured": true, 00:19:20.305 "data_offset": 0, 00:19:20.305 "data_size": 65536 00:19:20.305 }, 00:19:20.305 { 00:19:20.305 "name": "BaseBdev2", 00:19:20.305 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:20.305 "is_configured": true, 00:19:20.305 "data_offset": 0, 00:19:20.305 "data_size": 65536 00:19:20.305 } 00:19:20.305 ] 00:19:20.305 }' 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.305 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.305 [2024-12-09 23:00:36.157405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.565 [2024-12-09 23:00:36.191827] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.565 [2024-12-09 23:00:36.191920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.565 [2024-12-09 23:00:36.191938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.565 [2024-12-09 23:00:36.191948] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.565 23:00:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.565 "name": "raid_bdev1", 00:19:20.565 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:20.565 "strip_size_kb": 0, 00:19:20.565 "state": "online", 00:19:20.565 "raid_level": "raid1", 00:19:20.565 
"superblock": false, 00:19:20.565 "num_base_bdevs": 2, 00:19:20.565 "num_base_bdevs_discovered": 1, 00:19:20.565 "num_base_bdevs_operational": 1, 00:19:20.565 "base_bdevs_list": [ 00:19:20.565 { 00:19:20.565 "name": null, 00:19:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.565 "is_configured": false, 00:19:20.565 "data_offset": 0, 00:19:20.565 "data_size": 65536 00:19:20.565 }, 00:19:20.565 { 00:19:20.565 "name": "BaseBdev2", 00:19:20.565 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:20.565 "is_configured": true, 00:19:20.565 "data_offset": 0, 00:19:20.565 "data_size": 65536 00:19:20.565 } 00:19:20.565 ] 00:19:20.565 }' 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.565 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.082 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.082 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:21.082 "name": "raid_bdev1", 00:19:21.082 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:21.082 "strip_size_kb": 0, 00:19:21.082 "state": "online", 00:19:21.082 "raid_level": "raid1", 00:19:21.082 "superblock": false, 00:19:21.082 "num_base_bdevs": 2, 00:19:21.082 "num_base_bdevs_discovered": 1, 00:19:21.082 "num_base_bdevs_operational": 1, 00:19:21.082 "base_bdevs_list": [ 00:19:21.082 { 00:19:21.082 "name": null, 00:19:21.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.082 "is_configured": false, 00:19:21.083 "data_offset": 0, 00:19:21.083 "data_size": 65536 00:19:21.083 }, 00:19:21.083 { 00:19:21.083 "name": "BaseBdev2", 00:19:21.083 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:21.083 "is_configured": true, 00:19:21.083 "data_offset": 0, 00:19:21.083 "data_size": 65536 00:19:21.083 } 00:19:21.083 ] 00:19:21.083 }' 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.083 [2024-12-09 23:00:36.809051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.083 [2024-12-09 23:00:36.828052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:19:21.083 23:00:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.083 
23:00:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:21.083 [2024-12-09 23:00:36.830207] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.022 23:00:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.284 "name": "raid_bdev1", 00:19:22.284 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:22.284 "strip_size_kb": 0, 00:19:22.284 "state": "online", 00:19:22.284 "raid_level": "raid1", 00:19:22.284 "superblock": false, 00:19:22.284 "num_base_bdevs": 2, 00:19:22.284 "num_base_bdevs_discovered": 2, 00:19:22.284 "num_base_bdevs_operational": 2, 00:19:22.284 "process": { 00:19:22.284 "type": "rebuild", 00:19:22.284 "target": "spare", 00:19:22.284 "progress": { 00:19:22.284 "blocks": 20480, 00:19:22.284 "percent": 31 00:19:22.284 } 00:19:22.284 }, 00:19:22.284 "base_bdevs_list": [ 
00:19:22.284 { 00:19:22.284 "name": "spare", 00:19:22.284 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:22.284 "is_configured": true, 00:19:22.284 "data_offset": 0, 00:19:22.284 "data_size": 65536 00:19:22.284 }, 00:19:22.284 { 00:19:22.284 "name": "BaseBdev2", 00:19:22.284 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:22.284 "is_configured": true, 00:19:22.284 "data_offset": 0, 00:19:22.284 "data_size": 65536 00:19:22.284 } 00:19:22.284 ] 00:19:22.284 }' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.284 
23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.284 23:00:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.284 23:00:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.284 "name": "raid_bdev1", 00:19:22.284 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:22.284 "strip_size_kb": 0, 00:19:22.284 "state": "online", 00:19:22.284 "raid_level": "raid1", 00:19:22.284 "superblock": false, 00:19:22.284 "num_base_bdevs": 2, 00:19:22.284 "num_base_bdevs_discovered": 2, 00:19:22.284 "num_base_bdevs_operational": 2, 00:19:22.284 "process": { 00:19:22.284 "type": "rebuild", 00:19:22.284 "target": "spare", 00:19:22.284 "progress": { 00:19:22.284 "blocks": 22528, 00:19:22.284 "percent": 34 00:19:22.284 } 00:19:22.284 }, 00:19:22.284 "base_bdevs_list": [ 00:19:22.284 { 00:19:22.284 "name": "spare", 00:19:22.284 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:22.284 "is_configured": true, 00:19:22.284 "data_offset": 0, 00:19:22.284 "data_size": 65536 00:19:22.284 }, 00:19:22.284 { 00:19:22.284 "name": "BaseBdev2", 00:19:22.284 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:22.284 "is_configured": true, 00:19:22.284 "data_offset": 0, 00:19:22.284 "data_size": 65536 00:19:22.284 } 00:19:22.284 ] 00:19:22.284 }' 00:19:22.284 23:00:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.284 23:00:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:22.284 23:00:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.284 23:00:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.284 23:00:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.660 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.661 "name": "raid_bdev1", 00:19:23.661 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:23.661 "strip_size_kb": 0, 00:19:23.661 "state": "online", 00:19:23.661 "raid_level": "raid1", 00:19:23.661 "superblock": false, 00:19:23.661 "num_base_bdevs": 2, 00:19:23.661 "num_base_bdevs_discovered": 2, 00:19:23.661 "num_base_bdevs_operational": 2, 00:19:23.661 "process": { 
00:19:23.661 "type": "rebuild", 00:19:23.661 "target": "spare", 00:19:23.661 "progress": { 00:19:23.661 "blocks": 45056, 00:19:23.661 "percent": 68 00:19:23.661 } 00:19:23.661 }, 00:19:23.661 "base_bdevs_list": [ 00:19:23.661 { 00:19:23.661 "name": "spare", 00:19:23.661 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:23.661 "is_configured": true, 00:19:23.661 "data_offset": 0, 00:19:23.661 "data_size": 65536 00:19:23.661 }, 00:19:23.661 { 00:19:23.661 "name": "BaseBdev2", 00:19:23.661 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:23.661 "is_configured": true, 00:19:23.661 "data_offset": 0, 00:19:23.661 "data_size": 65536 00:19:23.661 } 00:19:23.661 ] 00:19:23.661 }' 00:19:23.661 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.661 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.661 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.661 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.661 23:00:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:24.230 [2024-12-09 23:00:40.045358] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:24.230 [2024-12-09 23:00:40.045577] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:24.230 [2024-12-09 23:00:40.045666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.487 "name": "raid_bdev1", 00:19:24.487 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:24.487 "strip_size_kb": 0, 00:19:24.487 "state": "online", 00:19:24.487 "raid_level": "raid1", 00:19:24.487 "superblock": false, 00:19:24.487 "num_base_bdevs": 2, 00:19:24.487 "num_base_bdevs_discovered": 2, 00:19:24.487 "num_base_bdevs_operational": 2, 00:19:24.487 "base_bdevs_list": [ 00:19:24.487 { 00:19:24.487 "name": "spare", 00:19:24.487 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:24.487 "is_configured": true, 00:19:24.487 "data_offset": 0, 00:19:24.487 "data_size": 65536 00:19:24.487 }, 00:19:24.487 { 00:19:24.487 "name": "BaseBdev2", 00:19:24.487 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:24.487 "is_configured": true, 00:19:24.487 "data_offset": 0, 00:19:24.487 "data_size": 65536 00:19:24.487 } 00:19:24.487 ] 00:19:24.487 }' 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.487 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:24.487 23:00:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.746 "name": "raid_bdev1", 00:19:24.746 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:24.746 "strip_size_kb": 0, 00:19:24.746 "state": "online", 00:19:24.746 "raid_level": "raid1", 00:19:24.746 "superblock": false, 00:19:24.746 "num_base_bdevs": 2, 00:19:24.746 "num_base_bdevs_discovered": 2, 00:19:24.746 "num_base_bdevs_operational": 2, 00:19:24.746 "base_bdevs_list": [ 00:19:24.746 { 00:19:24.746 "name": "spare", 00:19:24.746 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:24.746 "is_configured": true, 
00:19:24.746 "data_offset": 0, 00:19:24.746 "data_size": 65536 00:19:24.746 }, 00:19:24.746 { 00:19:24.746 "name": "BaseBdev2", 00:19:24.746 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:24.746 "is_configured": true, 00:19:24.746 "data_offset": 0, 00:19:24.746 "data_size": 65536 00:19:24.746 } 00:19:24.746 ] 00:19:24.746 }' 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.746 23:00:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.746 "name": "raid_bdev1", 00:19:24.746 "uuid": "1acf2b25-aac7-45ad-a1bc-67b419da1000", 00:19:24.746 "strip_size_kb": 0, 00:19:24.746 "state": "online", 00:19:24.746 "raid_level": "raid1", 00:19:24.746 "superblock": false, 00:19:24.746 "num_base_bdevs": 2, 00:19:24.746 "num_base_bdevs_discovered": 2, 00:19:24.746 "num_base_bdevs_operational": 2, 00:19:24.746 "base_bdevs_list": [ 00:19:24.746 { 00:19:24.746 "name": "spare", 00:19:24.746 "uuid": "f8650918-49e6-5048-b419-5aeb6bffa20c", 00:19:24.746 "is_configured": true, 00:19:24.746 "data_offset": 0, 00:19:24.746 "data_size": 65536 00:19:24.746 }, 00:19:24.746 { 00:19:24.746 "name": "BaseBdev2", 00:19:24.746 "uuid": "660295ca-af61-5fe6-bf26-bc21431ddeef", 00:19:24.746 "is_configured": true, 00:19:24.746 "data_offset": 0, 00:19:24.746 "data_size": 65536 00:19:24.746 } 00:19:24.746 ] 00:19:24.746 }' 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.746 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.315 [2024-12-09 23:00:40.961155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.315 [2024-12-09 
23:00:40.961207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.315 [2024-12-09 23:00:40.961301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.315 [2024-12-09 23:00:40.961373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.315 [2024-12-09 23:00:40.961383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.315 23:00:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:25.315 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:25.574 /dev/nbd0 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.574 1+0 records in 00:19:25.574 1+0 records out 00:19:25.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045035 s, 9.1 MB/s 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:25.574 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:25.833 /dev/nbd1 00:19:25.833 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.834 1+0 records in 00:19:25.834 1+0 records out 00:19:25.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057873 s, 7.1 MB/s 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:25.834 23:00:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.093 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:26.352 23:00:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.352 23:00:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75924 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75924 ']' 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75924 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75924 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75924' 00:19:26.613 killing process with pid 75924 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75924 00:19:26.613 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.613 00:19:26.613 Latency(us) 00:19:26.613 [2024-12-09T23:00:42.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.613 [2024-12-09T23:00:42.469Z] =================================================================================================================== 00:19:26.613 [2024-12-09T23:00:42.469Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.613 [2024-12-09 23:00:42.272710] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.613 23:00:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75924 00:19:26.871 [2024-12-09 23:00:42.605568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.260 ************************************ 00:19:28.260 END TEST raid_rebuild_test 00:19:28.260 ************************************ 00:19:28.260 23:00:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:28.260 00:19:28.260 real 0m16.238s 00:19:28.260 user 0m18.075s 00:19:28.260 sys 0m3.210s 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.260 23:00:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:19:28.260 23:00:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:28.260 23:00:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.260 23:00:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.260 ************************************ 00:19:28.260 START TEST raid_rebuild_test_sb 00:19:28.260 ************************************ 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76353 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76353 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 76353 ']' 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.260 23:00:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.260 [2024-12-09 23:00:43.987747] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:19:28.260 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:28.260 Zero copy mechanism will not be used. 00:19:28.260 [2024-12-09 23:00:43.987940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76353 ] 00:19:28.519 [2024-12-09 23:00:44.160535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.519 [2024-12-09 23:00:44.285481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.778 [2024-12-09 23:00:44.504476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.778 [2024-12-09 23:00:44.504638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.037 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.037 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:29.037 23:00:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:29.037 23:00:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:29.037 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.037 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.307 BaseBdev1_malloc 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.307 [2024-12-09 23:00:44.933633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:29.307 [2024-12-09 23:00:44.933771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.307 [2024-12-09 23:00:44.933805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:29.307 [2024-12-09 23:00:44.933822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.307 [2024-12-09 23:00:44.936518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.307 [2024-12-09 23:00:44.936563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:29.307 BaseBdev1 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.307 BaseBdev2_malloc 00:19:29.307 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.308 [2024-12-09 23:00:44.990828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:29.308 [2024-12-09 23:00:44.990894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.308 [2024-12-09 23:00:44.990914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:29.308 [2024-12-09 23:00:44.990926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.308 [2024-12-09 23:00:44.993109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.308 [2024-12-09 23:00:44.993151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:29.308 BaseBdev2 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.308 23:00:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.308 spare_malloc 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.308 spare_delay 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.308 [2024-12-09 23:00:45.071962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:29.308 [2024-12-09 23:00:45.072033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.308 [2024-12-09 23:00:45.072056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:29.308 [2024-12-09 23:00:45.072069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.308 [2024-12-09 23:00:45.074438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.308 [2024-12-09 23:00:45.074583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:29.308 spare 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.308 
23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.308 [2024-12-09 23:00:45.084005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.308 [2024-12-09 23:00:45.086008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.308 [2024-12-09 23:00:45.086274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:29.308 [2024-12-09 23:00:45.086295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:29.308 [2024-12-09 23:00:45.086592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:29.308 [2024-12-09 23:00:45.086774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:29.308 [2024-12-09 23:00:45.086785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:29.308 [2024-12-09 23:00:45.086952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.308 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.308 "name": "raid_bdev1", 00:19:29.308 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:29.309 "strip_size_kb": 0, 00:19:29.309 "state": "online", 00:19:29.309 "raid_level": "raid1", 00:19:29.309 "superblock": true, 00:19:29.309 "num_base_bdevs": 2, 00:19:29.309 "num_base_bdevs_discovered": 2, 00:19:29.309 "num_base_bdevs_operational": 2, 00:19:29.309 "base_bdevs_list": [ 00:19:29.309 { 00:19:29.309 "name": "BaseBdev1", 00:19:29.309 "uuid": "017465af-7869-58c1-a488-5d4b20094d5b", 00:19:29.309 "is_configured": true, 00:19:29.309 "data_offset": 2048, 00:19:29.309 "data_size": 63488 00:19:29.309 }, 00:19:29.309 { 00:19:29.309 "name": "BaseBdev2", 00:19:29.309 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:29.309 "is_configured": true, 00:19:29.309 "data_offset": 2048, 00:19:29.309 "data_size": 63488 00:19:29.309 } 00:19:29.309 ] 00:19:29.309 }' 00:19:29.309 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.309 23:00:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.875 [2024-12-09 23:00:45.507687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:29.875 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:30.134 [2024-12-09 23:00:45.806895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:30.134 /dev/nbd0 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.134 1+0 records in 00:19:30.134 1+0 records out 00:19:30.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436473 s, 9.4 MB/s 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:30.134 23:00:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:35.409 63488+0 records in 00:19:35.409 63488+0 records out 00:19:35.409 32505856 bytes (33 MB, 31 MiB) copied, 4.82271 s, 6.7 MB/s 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:35.409 [2024-12-09 23:00:50.937788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 [2024-12-09 23:00:50.957889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 23:00:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 23:00:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.409 "name": "raid_bdev1", 00:19:35.409 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:35.409 "strip_size_kb": 0, 00:19:35.409 "state": "online", 00:19:35.409 "raid_level": "raid1", 
00:19:35.409 "superblock": true, 00:19:35.409 "num_base_bdevs": 2, 00:19:35.409 "num_base_bdevs_discovered": 1, 00:19:35.409 "num_base_bdevs_operational": 1, 00:19:35.409 "base_bdevs_list": [ 00:19:35.409 { 00:19:35.409 "name": null, 00:19:35.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.409 "is_configured": false, 00:19:35.409 "data_offset": 0, 00:19:35.409 "data_size": 63488 00:19:35.409 }, 00:19:35.409 { 00:19:35.409 "name": "BaseBdev2", 00:19:35.409 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:35.409 "is_configured": true, 00:19:35.409 "data_offset": 2048, 00:19:35.409 "data_size": 63488 00:19:35.409 } 00:19:35.409 ] 00:19:35.409 }' 00:19:35.409 23:00:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.409 23:00:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.668 23:00:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.668 23:00:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.668 23:00:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.668 [2024-12-09 23:00:51.425097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.668 [2024-12-09 23:00:51.442443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:19:35.668 23:00:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.668 23:00:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:35.668 [2024-12-09 23:00:51.444319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.608 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.866 "name": "raid_bdev1", 00:19:36.866 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:36.866 "strip_size_kb": 0, 00:19:36.866 "state": "online", 00:19:36.866 "raid_level": "raid1", 00:19:36.866 "superblock": true, 00:19:36.866 "num_base_bdevs": 2, 00:19:36.866 "num_base_bdevs_discovered": 2, 00:19:36.866 "num_base_bdevs_operational": 2, 00:19:36.866 "process": { 00:19:36.866 "type": "rebuild", 00:19:36.866 "target": "spare", 00:19:36.866 "progress": { 00:19:36.866 "blocks": 20480, 00:19:36.866 "percent": 32 00:19:36.866 } 00:19:36.866 }, 00:19:36.866 "base_bdevs_list": [ 00:19:36.866 { 00:19:36.866 "name": "spare", 00:19:36.866 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:36.866 "is_configured": true, 00:19:36.866 "data_offset": 2048, 00:19:36.866 "data_size": 63488 00:19:36.866 }, 00:19:36.866 { 00:19:36.866 "name": "BaseBdev2", 00:19:36.866 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:36.866 "is_configured": true, 00:19:36.866 "data_offset": 2048, 
00:19:36.866 "data_size": 63488 00:19:36.866 } 00:19:36.866 ] 00:19:36.866 }' 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 [2024-12-09 23:00:52.592611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.866 [2024-12-09 23:00:52.650287] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:36.866 [2024-12-09 23:00:52.650491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.866 [2024-12-09 23:00:52.650541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.866 [2024-12-09 23:00:52.650579] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.866 23:00:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.866 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.125 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.125 "name": "raid_bdev1", 00:19:37.125 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:37.125 "strip_size_kb": 0, 00:19:37.125 "state": "online", 00:19:37.125 "raid_level": "raid1", 00:19:37.125 "superblock": true, 00:19:37.125 "num_base_bdevs": 2, 00:19:37.125 "num_base_bdevs_discovered": 1, 00:19:37.125 "num_base_bdevs_operational": 1, 00:19:37.125 "base_bdevs_list": [ 00:19:37.125 { 00:19:37.125 "name": null, 00:19:37.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.125 "is_configured": false, 00:19:37.125 "data_offset": 0, 00:19:37.125 "data_size": 63488 00:19:37.125 }, 00:19:37.125 { 
00:19:37.125 "name": "BaseBdev2", 00:19:37.125 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:37.125 "is_configured": true, 00:19:37.125 "data_offset": 2048, 00:19:37.125 "data_size": 63488 00:19:37.125 } 00:19:37.125 ] 00:19:37.125 }' 00:19:37.125 23:00:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.125 23:00:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.384 23:00:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.642 "name": "raid_bdev1", 00:19:37.642 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:37.642 "strip_size_kb": 0, 00:19:37.642 "state": "online", 00:19:37.642 "raid_level": "raid1", 00:19:37.642 "superblock": true, 00:19:37.642 "num_base_bdevs": 2, 00:19:37.642 "num_base_bdevs_discovered": 1, 00:19:37.642 "num_base_bdevs_operational": 1, 
00:19:37.642 "base_bdevs_list": [ 00:19:37.642 { 00:19:37.642 "name": null, 00:19:37.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.642 "is_configured": false, 00:19:37.642 "data_offset": 0, 00:19:37.642 "data_size": 63488 00:19:37.642 }, 00:19:37.642 { 00:19:37.642 "name": "BaseBdev2", 00:19:37.642 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:37.642 "is_configured": true, 00:19:37.642 "data_offset": 2048, 00:19:37.642 "data_size": 63488 00:19:37.642 } 00:19:37.642 ] 00:19:37.642 }' 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.642 [2024-12-09 23:00:53.364370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.642 [2024-12-09 23:00:53.381086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.642 23:00:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:37.642 [2024-12-09 23:00:53.383180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.576 23:00:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.835 "name": "raid_bdev1", 00:19:38.835 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:38.835 "strip_size_kb": 0, 00:19:38.835 "state": "online", 00:19:38.835 "raid_level": "raid1", 00:19:38.835 "superblock": true, 00:19:38.835 "num_base_bdevs": 2, 00:19:38.835 "num_base_bdevs_discovered": 2, 00:19:38.835 "num_base_bdevs_operational": 2, 00:19:38.835 "process": { 00:19:38.835 "type": "rebuild", 00:19:38.835 "target": "spare", 00:19:38.835 "progress": { 00:19:38.835 "blocks": 20480, 00:19:38.835 "percent": 32 00:19:38.835 } 00:19:38.835 }, 00:19:38.835 "base_bdevs_list": [ 00:19:38.835 { 00:19:38.835 "name": "spare", 00:19:38.835 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:38.835 "is_configured": true, 00:19:38.835 "data_offset": 2048, 00:19:38.835 "data_size": 63488 00:19:38.835 }, 00:19:38.835 { 00:19:38.835 "name": "BaseBdev2", 00:19:38.835 "uuid": 
"0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:38.835 "is_configured": true, 00:19:38.835 "data_offset": 2048, 00:19:38.835 "data_size": 63488 00:19:38.835 } 00:19:38.835 ] 00:19:38.835 }' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:38.835 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.835 "name": "raid_bdev1", 00:19:38.835 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:38.835 "strip_size_kb": 0, 00:19:38.835 "state": "online", 00:19:38.835 "raid_level": "raid1", 00:19:38.835 "superblock": true, 00:19:38.835 "num_base_bdevs": 2, 00:19:38.835 "num_base_bdevs_discovered": 2, 00:19:38.835 "num_base_bdevs_operational": 2, 00:19:38.835 "process": { 00:19:38.835 "type": "rebuild", 00:19:38.835 "target": "spare", 00:19:38.835 "progress": { 00:19:38.835 "blocks": 22528, 00:19:38.835 "percent": 35 00:19:38.835 } 00:19:38.835 }, 00:19:38.835 "base_bdevs_list": [ 00:19:38.835 { 00:19:38.835 "name": "spare", 00:19:38.835 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:38.835 "is_configured": true, 00:19:38.835 "data_offset": 2048, 00:19:38.835 "data_size": 63488 00:19:38.835 }, 00:19:38.835 { 00:19:38.835 "name": "BaseBdev2", 00:19:38.835 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:38.835 "is_configured": true, 00:19:38.835 "data_offset": 2048, 00:19:38.835 "data_size": 63488 00:19:38.835 } 00:19:38.835 ] 00:19:38.835 }' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.835 23:00:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.215 "name": "raid_bdev1", 00:19:40.215 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:40.215 "strip_size_kb": 0, 00:19:40.215 "state": "online", 00:19:40.215 "raid_level": "raid1", 00:19:40.215 "superblock": true, 00:19:40.215 "num_base_bdevs": 2, 00:19:40.215 "num_base_bdevs_discovered": 2, 00:19:40.215 
"num_base_bdevs_operational": 2, 00:19:40.215 "process": { 00:19:40.215 "type": "rebuild", 00:19:40.215 "target": "spare", 00:19:40.215 "progress": { 00:19:40.215 "blocks": 45056, 00:19:40.215 "percent": 70 00:19:40.215 } 00:19:40.215 }, 00:19:40.215 "base_bdevs_list": [ 00:19:40.215 { 00:19:40.215 "name": "spare", 00:19:40.215 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:40.215 "is_configured": true, 00:19:40.215 "data_offset": 2048, 00:19:40.215 "data_size": 63488 00:19:40.215 }, 00:19:40.215 { 00:19:40.215 "name": "BaseBdev2", 00:19:40.215 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:40.215 "is_configured": true, 00:19:40.215 "data_offset": 2048, 00:19:40.215 "data_size": 63488 00:19:40.215 } 00:19:40.215 ] 00:19:40.215 }' 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.215 23:00:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.782 [2024-12-09 23:00:56.498001] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:40.782 [2024-12-09 23:00:56.498178] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:40.782 [2024-12-09 23:00:56.498329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.041 "name": "raid_bdev1", 00:19:41.041 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:41.041 "strip_size_kb": 0, 00:19:41.041 "state": "online", 00:19:41.041 "raid_level": "raid1", 00:19:41.041 "superblock": true, 00:19:41.041 "num_base_bdevs": 2, 00:19:41.041 "num_base_bdevs_discovered": 2, 00:19:41.041 "num_base_bdevs_operational": 2, 00:19:41.041 "base_bdevs_list": [ 00:19:41.041 { 00:19:41.041 "name": "spare", 00:19:41.041 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:41.041 "is_configured": true, 00:19:41.041 "data_offset": 2048, 00:19:41.041 "data_size": 63488 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "name": "BaseBdev2", 00:19:41.041 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:41.041 "is_configured": true, 00:19:41.041 "data_offset": 2048, 00:19:41.041 "data_size": 63488 00:19:41.041 } 00:19:41.041 ] 00:19:41.041 }' 00:19:41.041 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.300 23:00:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.300 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.300 "name": "raid_bdev1", 00:19:41.300 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:41.300 "strip_size_kb": 0, 00:19:41.300 "state": "online", 00:19:41.301 "raid_level": "raid1", 00:19:41.301 "superblock": true, 00:19:41.301 "num_base_bdevs": 2, 00:19:41.301 "num_base_bdevs_discovered": 2, 00:19:41.301 "num_base_bdevs_operational": 2, 
00:19:41.301 "base_bdevs_list": [ 00:19:41.301 { 00:19:41.301 "name": "spare", 00:19:41.301 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:41.301 "is_configured": true, 00:19:41.301 "data_offset": 2048, 00:19:41.301 "data_size": 63488 00:19:41.301 }, 00:19:41.301 { 00:19:41.301 "name": "BaseBdev2", 00:19:41.301 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:41.301 "is_configured": true, 00:19:41.301 "data_offset": 2048, 00:19:41.301 "data_size": 63488 00:19:41.301 } 00:19:41.301 ] 00:19:41.301 }' 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.301 23:00:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.301 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.563 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.563 "name": "raid_bdev1", 00:19:41.563 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:41.563 "strip_size_kb": 0, 00:19:41.563 "state": "online", 00:19:41.563 "raid_level": "raid1", 00:19:41.563 "superblock": true, 00:19:41.563 "num_base_bdevs": 2, 00:19:41.563 "num_base_bdevs_discovered": 2, 00:19:41.563 "num_base_bdevs_operational": 2, 00:19:41.563 "base_bdevs_list": [ 00:19:41.563 { 00:19:41.563 "name": "spare", 00:19:41.563 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:41.563 "is_configured": true, 00:19:41.563 "data_offset": 2048, 00:19:41.563 "data_size": 63488 00:19:41.563 }, 00:19:41.563 { 00:19:41.563 "name": "BaseBdev2", 00:19:41.563 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:41.563 "is_configured": true, 00:19:41.563 "data_offset": 2048, 00:19:41.563 "data_size": 63488 00:19:41.563 } 00:19:41.563 ] 00:19:41.563 }' 00:19:41.563 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.563 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.824 [2024-12-09 23:00:57.620715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.824 [2024-12-09 23:00:57.620858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.824 [2024-12-09 23:00:57.620980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.824 [2024-12-09 23:00:57.621103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.824 [2024-12-09 23:00:57.621165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.824 
23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:41.824 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:42.085 /dev/nbd0 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.346 23:00:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.346 1+0 records in 00:19:42.346 1+0 records out 00:19:42.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038572 s, 10.6 MB/s 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.346 23:00:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:42.346 /dev/nbd1 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- 
# grep -q -w nbd1 /proc/partitions 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.607 1+0 records in 00:19:42.607 1+0 records out 00:19:42.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643747 s, 6.4 MB/s 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.607 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.867 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.127 [2024-12-09 23:00:58.901689] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.127 [2024-12-09 23:00:58.901784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.127 [2024-12-09 23:00:58.901823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:43.127 [2024-12-09 23:00:58.901851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.127 [2024-12-09 23:00:58.904179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.127 [2024-12-09 23:00:58.904253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.127 [2024-12-09 23:00:58.904372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:19:43.127 [2024-12-09 23:00:58.904475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.127 [2024-12-09 23:00:58.904658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.127 spare 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.127 23:00:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.386 [2024-12-09 23:00:59.004607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:43.386 [2024-12-09 23:00:59.004721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:43.386 [2024-12-09 23:00:59.005078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:19:43.386 [2024-12-09 23:00:59.005303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:43.386 [2024-12-09 23:00:59.005319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:43.386 [2024-12-09 23:00:59.005582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.386 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.386 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.387 23:00:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.387 "name": "raid_bdev1", 00:19:43.387 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:43.387 "strip_size_kb": 0, 00:19:43.387 "state": "online", 00:19:43.387 "raid_level": "raid1", 00:19:43.387 "superblock": true, 00:19:43.387 "num_base_bdevs": 2, 00:19:43.387 "num_base_bdevs_discovered": 2, 00:19:43.387 "num_base_bdevs_operational": 2, 00:19:43.387 "base_bdevs_list": [ 00:19:43.387 { 00:19:43.387 "name": "spare", 00:19:43.387 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:43.387 "is_configured": true, 00:19:43.387 "data_offset": 2048, 00:19:43.387 "data_size": 63488 00:19:43.387 }, 00:19:43.387 { 
00:19:43.387 "name": "BaseBdev2", 00:19:43.387 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:43.387 "is_configured": true, 00:19:43.387 "data_offset": 2048, 00:19:43.387 "data_size": 63488 00:19:43.387 } 00:19:43.387 ] 00:19:43.387 }' 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.387 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.682 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.682 "name": "raid_bdev1", 00:19:43.682 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:43.682 "strip_size_kb": 0, 00:19:43.682 "state": "online", 00:19:43.682 "raid_level": "raid1", 00:19:43.682 "superblock": true, 00:19:43.682 "num_base_bdevs": 2, 00:19:43.682 "num_base_bdevs_discovered": 2, 00:19:43.683 "num_base_bdevs_operational": 2, 
00:19:43.683 "base_bdevs_list": [ 00:19:43.683 { 00:19:43.683 "name": "spare", 00:19:43.683 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:43.683 "is_configured": true, 00:19:43.683 "data_offset": 2048, 00:19:43.683 "data_size": 63488 00:19:43.683 }, 00:19:43.683 { 00:19:43.683 "name": "BaseBdev2", 00:19:43.683 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:43.683 "is_configured": true, 00:19:43.683 "data_offset": 2048, 00:19:43.683 "data_size": 63488 00:19:43.683 } 00:19:43.683 ] 00:19:43.683 }' 00:19:43.683 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.942 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.943 [2024-12-09 23:00:59.660567] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.943 "name": "raid_bdev1", 00:19:43.943 "uuid": 
"d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:43.943 "strip_size_kb": 0, 00:19:43.943 "state": "online", 00:19:43.943 "raid_level": "raid1", 00:19:43.943 "superblock": true, 00:19:43.943 "num_base_bdevs": 2, 00:19:43.943 "num_base_bdevs_discovered": 1, 00:19:43.943 "num_base_bdevs_operational": 1, 00:19:43.943 "base_bdevs_list": [ 00:19:43.943 { 00:19:43.943 "name": null, 00:19:43.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.943 "is_configured": false, 00:19:43.943 "data_offset": 0, 00:19:43.943 "data_size": 63488 00:19:43.943 }, 00:19:43.943 { 00:19:43.943 "name": "BaseBdev2", 00:19:43.943 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:43.943 "is_configured": true, 00:19:43.943 "data_offset": 2048, 00:19:43.943 "data_size": 63488 00:19:43.943 } 00:19:43.943 ] 00:19:43.943 }' 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.943 23:00:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.511 23:01:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.511 23:01:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.511 23:01:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.511 [2024-12-09 23:01:00.107797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.511 [2024-12-09 23:01:00.108017] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:44.511 [2024-12-09 23:01:00.108034] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:44.511 [2024-12-09 23:01:00.108079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.511 [2024-12-09 23:01:00.124456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:19:44.511 23:01:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.511 23:01:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:44.511 [2024-12-09 23:01:00.126622] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.450 "name": "raid_bdev1", 00:19:45.450 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:45.450 "strip_size_kb": 0, 00:19:45.450 "state": "online", 00:19:45.450 "raid_level": "raid1", 
00:19:45.450 "superblock": true, 00:19:45.450 "num_base_bdevs": 2, 00:19:45.450 "num_base_bdevs_discovered": 2, 00:19:45.450 "num_base_bdevs_operational": 2, 00:19:45.450 "process": { 00:19:45.450 "type": "rebuild", 00:19:45.450 "target": "spare", 00:19:45.450 "progress": { 00:19:45.450 "blocks": 20480, 00:19:45.450 "percent": 32 00:19:45.450 } 00:19:45.450 }, 00:19:45.450 "base_bdevs_list": [ 00:19:45.450 { 00:19:45.450 "name": "spare", 00:19:45.450 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:45.450 "is_configured": true, 00:19:45.450 "data_offset": 2048, 00:19:45.450 "data_size": 63488 00:19:45.450 }, 00:19:45.450 { 00:19:45.450 "name": "BaseBdev2", 00:19:45.450 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:45.450 "is_configured": true, 00:19:45.450 "data_offset": 2048, 00:19:45.450 "data_size": 63488 00:19:45.450 } 00:19:45.450 ] 00:19:45.450 }' 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.450 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.450 [2024-12-09 23:01:01.290808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.709 [2024-12-09 23:01:01.332521] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:45.709 [2024-12-09 23:01:01.332709] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:45.709 [2024-12-09 23:01:01.332767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.709 [2024-12-09 23:01:01.332798] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.709 "name": "raid_bdev1", 00:19:45.709 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:45.709 "strip_size_kb": 0, 00:19:45.709 "state": "online", 00:19:45.709 "raid_level": "raid1", 00:19:45.709 "superblock": true, 00:19:45.709 "num_base_bdevs": 2, 00:19:45.709 "num_base_bdevs_discovered": 1, 00:19:45.709 "num_base_bdevs_operational": 1, 00:19:45.709 "base_bdevs_list": [ 00:19:45.709 { 00:19:45.709 "name": null, 00:19:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.709 "is_configured": false, 00:19:45.709 "data_offset": 0, 00:19:45.709 "data_size": 63488 00:19:45.709 }, 00:19:45.709 { 00:19:45.709 "name": "BaseBdev2", 00:19:45.709 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:45.709 "is_configured": true, 00:19:45.709 "data_offset": 2048, 00:19:45.709 "data_size": 63488 00:19:45.709 } 00:19:45.709 ] 00:19:45.709 }' 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.709 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.280 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:46.280 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.280 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.280 [2024-12-09 23:01:01.880648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.280 [2024-12-09 23:01:01.880786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.280 [2024-12-09 23:01:01.880834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:46.280 [2024-12-09 23:01:01.880875] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.280 [2024-12-09 23:01:01.881435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.280 [2024-12-09 23:01:01.881531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.280 [2024-12-09 23:01:01.881678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:46.280 [2024-12-09 23:01:01.881729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:46.280 [2024-12-09 23:01:01.881779] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:46.280 [2024-12-09 23:01:01.881846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.280 [2024-12-09 23:01:01.901220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:46.280 spare 00:19:46.280 23:01:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.280 23:01:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:46.280 [2024-12-09 23:01:01.903627] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.222 "name": "raid_bdev1", 00:19:47.222 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:47.222 "strip_size_kb": 0, 00:19:47.222 "state": "online", 00:19:47.222 "raid_level": "raid1", 00:19:47.222 "superblock": true, 00:19:47.222 "num_base_bdevs": 2, 00:19:47.222 "num_base_bdevs_discovered": 2, 00:19:47.222 "num_base_bdevs_operational": 2, 00:19:47.222 "process": { 00:19:47.222 "type": "rebuild", 00:19:47.222 "target": "spare", 00:19:47.222 "progress": { 00:19:47.222 "blocks": 20480, 00:19:47.222 "percent": 32 00:19:47.222 } 00:19:47.222 }, 00:19:47.222 "base_bdevs_list": [ 00:19:47.222 { 00:19:47.222 "name": "spare", 00:19:47.222 "uuid": "38d8bfd9-5a97-5f99-b16e-f699581debce", 00:19:47.222 "is_configured": true, 00:19:47.222 "data_offset": 2048, 00:19:47.222 "data_size": 63488 00:19:47.222 }, 00:19:47.222 { 00:19:47.222 "name": "BaseBdev2", 00:19:47.222 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:47.222 "is_configured": true, 00:19:47.222 "data_offset": 2048, 00:19:47.222 "data_size": 63488 00:19:47.222 } 00:19:47.222 ] 00:19:47.222 }' 00:19:47.222 23:01:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.222 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.222 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.222 
23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.222 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:47.222 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.222 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.222 [2024-12-09 23:01:03.070788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.484 [2024-12-09 23:01:03.109760] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:47.484 [2024-12-09 23:01:03.109832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.484 [2024-12-09 23:01:03.109853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.484 [2024-12-09 23:01:03.109862] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:47.484 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.484 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.484 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.484 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.484 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.484 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.485 "name": "raid_bdev1", 00:19:47.485 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:47.485 "strip_size_kb": 0, 00:19:47.485 "state": "online", 00:19:47.485 "raid_level": "raid1", 00:19:47.485 "superblock": true, 00:19:47.485 "num_base_bdevs": 2, 00:19:47.485 "num_base_bdevs_discovered": 1, 00:19:47.485 "num_base_bdevs_operational": 1, 00:19:47.485 "base_bdevs_list": [ 00:19:47.485 { 00:19:47.485 "name": null, 00:19:47.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.485 "is_configured": false, 00:19:47.485 "data_offset": 0, 00:19:47.485 "data_size": 63488 00:19:47.485 }, 00:19:47.485 { 00:19:47.485 "name": "BaseBdev2", 00:19:47.485 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:47.485 "is_configured": true, 00:19:47.485 "data_offset": 2048, 00:19:47.485 "data_size": 63488 00:19:47.485 } 00:19:47.485 ] 00:19:47.485 }' 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.485 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.051 23:01:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.051 "name": "raid_bdev1", 00:19:48.051 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:48.051 "strip_size_kb": 0, 00:19:48.051 "state": "online", 00:19:48.051 "raid_level": "raid1", 00:19:48.051 "superblock": true, 00:19:48.051 "num_base_bdevs": 2, 00:19:48.051 "num_base_bdevs_discovered": 1, 00:19:48.051 "num_base_bdevs_operational": 1, 00:19:48.051 "base_bdevs_list": [ 00:19:48.051 { 00:19:48.051 "name": null, 00:19:48.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.051 "is_configured": false, 00:19:48.051 "data_offset": 0, 00:19:48.051 "data_size": 63488 00:19:48.051 }, 00:19:48.051 { 00:19:48.051 "name": "BaseBdev2", 00:19:48.051 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:48.051 "is_configured": true, 00:19:48.051 "data_offset": 2048, 00:19:48.051 "data_size": 
63488 00:19:48.051 } 00:19:48.051 ] 00:19:48.051 }' 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.051 [2024-12-09 23:01:03.793322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:48.051 [2024-12-09 23:01:03.793508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.051 [2024-12-09 23:01:03.793540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:48.051 [2024-12-09 23:01:03.793564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.051 [2024-12-09 23:01:03.794080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.051 [2024-12-09 23:01:03.794102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:19:48.051 [2024-12-09 23:01:03.794196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:48.051 [2024-12-09 23:01:03.794213] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:48.051 [2024-12-09 23:01:03.794227] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:48.051 [2024-12-09 23:01:03.794239] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:48.051 BaseBdev1 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.051 23:01:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.985 23:01:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.243 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.243 "name": "raid_bdev1", 00:19:49.243 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:49.243 "strip_size_kb": 0, 00:19:49.243 "state": "online", 00:19:49.243 "raid_level": "raid1", 00:19:49.243 "superblock": true, 00:19:49.243 "num_base_bdevs": 2, 00:19:49.243 "num_base_bdevs_discovered": 1, 00:19:49.243 "num_base_bdevs_operational": 1, 00:19:49.243 "base_bdevs_list": [ 00:19:49.243 { 00:19:49.243 "name": null, 00:19:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.243 "is_configured": false, 00:19:49.243 "data_offset": 0, 00:19:49.243 "data_size": 63488 00:19:49.243 }, 00:19:49.243 { 00:19:49.243 "name": "BaseBdev2", 00:19:49.243 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:49.243 "is_configured": true, 00:19:49.243 "data_offset": 2048, 00:19:49.243 "data_size": 63488 00:19:49.243 } 00:19:49.243 ] 00:19:49.243 }' 00:19:49.243 23:01:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.243 23:01:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.500 "name": "raid_bdev1", 00:19:49.500 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:49.500 "strip_size_kb": 0, 00:19:49.500 "state": "online", 00:19:49.500 "raid_level": "raid1", 00:19:49.500 "superblock": true, 00:19:49.500 "num_base_bdevs": 2, 00:19:49.500 "num_base_bdevs_discovered": 1, 00:19:49.500 "num_base_bdevs_operational": 1, 00:19:49.500 "base_bdevs_list": [ 00:19:49.500 { 00:19:49.500 "name": null, 00:19:49.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.500 "is_configured": false, 00:19:49.500 "data_offset": 0, 00:19:49.500 "data_size": 63488 00:19:49.500 }, 00:19:49.500 { 00:19:49.500 "name": "BaseBdev2", 00:19:49.500 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:49.500 "is_configured": true, 00:19:49.500 "data_offset": 2048, 00:19:49.500 "data_size": 63488 00:19:49.500 } 00:19:49.500 ] 00:19:49.500 }' 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.500 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.500 23:01:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.760 [2024-12-09 23:01:05.406801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.760 [2024-12-09 23:01:05.407118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:49.760 [2024-12-09 23:01:05.407199] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:49.760 request: 00:19:49.760 { 00:19:49.760 "base_bdev": "BaseBdev1", 00:19:49.760 "raid_bdev": "raid_bdev1", 00:19:49.760 "method": 
"bdev_raid_add_base_bdev", 00:19:49.760 "req_id": 1 00:19:49.760 } 00:19:49.760 Got JSON-RPC error response 00:19:49.760 response: 00:19:49.760 { 00:19:49.760 "code": -22, 00:19:49.760 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:49.760 } 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.760 23:01:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.696 23:01:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.696 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.696 "name": "raid_bdev1", 00:19:50.696 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:50.696 "strip_size_kb": 0, 00:19:50.696 "state": "online", 00:19:50.696 "raid_level": "raid1", 00:19:50.696 "superblock": true, 00:19:50.696 "num_base_bdevs": 2, 00:19:50.696 "num_base_bdevs_discovered": 1, 00:19:50.696 "num_base_bdevs_operational": 1, 00:19:50.696 "base_bdevs_list": [ 00:19:50.696 { 00:19:50.696 "name": null, 00:19:50.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.696 "is_configured": false, 00:19:50.696 "data_offset": 0, 00:19:50.696 "data_size": 63488 00:19:50.696 }, 00:19:50.696 { 00:19:50.696 "name": "BaseBdev2", 00:19:50.696 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:50.696 "is_configured": true, 00:19:50.697 "data_offset": 2048, 00:19:50.697 "data_size": 63488 00:19:50.697 } 00:19:50.697 ] 00:19:50.697 }' 00:19:50.697 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.697 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.278 "name": "raid_bdev1", 00:19:51.278 "uuid": "d3990aee-c5a2-4ec1-bae0-c5a848893c1d", 00:19:51.278 "strip_size_kb": 0, 00:19:51.278 "state": "online", 00:19:51.278 "raid_level": "raid1", 00:19:51.278 "superblock": true, 00:19:51.278 "num_base_bdevs": 2, 00:19:51.278 "num_base_bdevs_discovered": 1, 00:19:51.278 "num_base_bdevs_operational": 1, 00:19:51.278 "base_bdevs_list": [ 00:19:51.278 { 00:19:51.278 "name": null, 00:19:51.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.278 "is_configured": false, 00:19:51.278 "data_offset": 0, 00:19:51.278 "data_size": 63488 00:19:51.278 }, 00:19:51.278 { 00:19:51.278 "name": "BaseBdev2", 00:19:51.278 "uuid": "0059fecc-e96c-57f2-95ca-67cd51e983db", 00:19:51.278 "is_configured": true, 00:19:51.278 "data_offset": 2048, 00:19:51.278 "data_size": 63488 00:19:51.278 } 00:19:51.278 ] 00:19:51.278 }' 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:19:51.278 23:01:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76353 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76353 ']' 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76353 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76353 00:19:51.278 killing process with pid 76353 00:19:51.278 Received shutdown signal, test time was about 60.000000 seconds 00:19:51.278 00:19:51.278 Latency(us) 00:19:51.278 [2024-12-09T23:01:07.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.278 [2024-12-09T23:01:07.134Z] =================================================================================================================== 00:19:51.278 [2024-12-09T23:01:07.134Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76353' 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76353 00:19:51.278 [2024-12-09 23:01:07.093808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.278 [2024-12-09 
23:01:07.093941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.278 23:01:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76353 00:19:51.278 [2024-12-09 23:01:07.093997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.278 [2024-12-09 23:01:07.094013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:51.847 [2024-12-09 23:01:07.407387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:53.227 00:19:53.227 real 0m24.884s 00:19:53.227 user 0m30.132s 00:19:53.227 sys 0m4.247s 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.227 ************************************ 00:19:53.227 END TEST raid_rebuild_test_sb 00:19:53.227 ************************************ 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 23:01:08 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:53.227 23:01:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:53.227 23:01:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.227 23:01:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.227 ************************************ 00:19:53.227 START TEST raid_rebuild_test_io 00:19:53.227 ************************************ 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:53.227 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:53.228 
23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77094 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77094 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 77094 ']' 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.228 23:01:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.228 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:53.228 Zero copy mechanism will not be used. 00:19:53.228 [2024-12-09 23:01:08.965241] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:19:53.228 [2024-12-09 23:01:08.965384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77094 ] 00:19:53.487 [2024-12-09 23:01:09.131845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.487 [2024-12-09 23:01:09.268830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.747 [2024-12-09 23:01:09.511876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.747 [2024-12-09 23:01:09.511925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.015 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.015 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:54.015 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:54.015 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:54.015 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.015 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 BaseBdev1_malloc 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 [2024-12-09 23:01:09.897674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:54.275 [2024-12-09 23:01:09.897776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.275 [2024-12-09 23:01:09.897807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:54.275 [2024-12-09 23:01:09.897821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.275 [2024-12-09 23:01:09.900341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.275 [2024-12-09 23:01:09.900399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:54.275 BaseBdev1 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 BaseBdev2_malloc 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 [2024-12-09 23:01:09.961318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:54.275 [2024-12-09 23:01:09.961511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.275 [2024-12-09 23:01:09.961539] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:54.275 [2024-12-09 23:01:09.961555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.275 [2024-12-09 23:01:09.963961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.275 [2024-12-09 23:01:09.964005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:54.275 BaseBdev2 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 spare_malloc 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 spare_delay 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 [2024-12-09 23:01:10.048101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:19:54.275 [2024-12-09 23:01:10.048185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.275 [2024-12-09 23:01:10.048210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:54.275 [2024-12-09 23:01:10.048223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.275 [2024-12-09 23:01:10.050593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.275 [2024-12-09 23:01:10.050633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:54.275 spare 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 [2024-12-09 23:01:10.060142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.275 [2024-12-09 23:01:10.062229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.275 [2024-12-09 23:01:10.062343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:54.275 [2024-12-09 23:01:10.062359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:54.275 [2024-12-09 23:01:10.062644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:54.275 [2024-12-09 23:01:10.062840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:54.275 [2024-12-09 23:01:10.062854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:19:54.275 [2024-12-09 23:01:10.063039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.275 
"name": "raid_bdev1", 00:19:54.275 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:54.275 "strip_size_kb": 0, 00:19:54.275 "state": "online", 00:19:54.275 "raid_level": "raid1", 00:19:54.275 "superblock": false, 00:19:54.275 "num_base_bdevs": 2, 00:19:54.275 "num_base_bdevs_discovered": 2, 00:19:54.275 "num_base_bdevs_operational": 2, 00:19:54.275 "base_bdevs_list": [ 00:19:54.275 { 00:19:54.275 "name": "BaseBdev1", 00:19:54.275 "uuid": "33d0921a-abb2-503f-8ccd-09b92a42f245", 00:19:54.275 "is_configured": true, 00:19:54.275 "data_offset": 0, 00:19:54.275 "data_size": 65536 00:19:54.275 }, 00:19:54.275 { 00:19:54.275 "name": "BaseBdev2", 00:19:54.275 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:54.275 "is_configured": true, 00:19:54.275 "data_offset": 0, 00:19:54.275 "data_size": 65536 00:19:54.275 } 00:19:54.275 ] 00:19:54.275 }' 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.275 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:54.841 [2024-12-09 23:01:10.515748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 [2024-12-09 23:01:10.619214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.841 23:01:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.841 "name": "raid_bdev1", 00:19:54.841 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:54.841 "strip_size_kb": 0, 00:19:54.841 "state": "online", 00:19:54.841 "raid_level": "raid1", 00:19:54.841 "superblock": false, 00:19:54.841 "num_base_bdevs": 2, 00:19:54.841 "num_base_bdevs_discovered": 1, 00:19:54.841 "num_base_bdevs_operational": 1, 00:19:54.841 "base_bdevs_list": [ 00:19:54.841 { 00:19:54.841 "name": null, 00:19:54.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.841 "is_configured": false, 00:19:54.841 "data_offset": 0, 00:19:54.841 "data_size": 65536 00:19:54.841 }, 00:19:54.841 { 00:19:54.841 "name": "BaseBdev2", 00:19:54.841 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:54.841 "is_configured": true, 00:19:54.841 "data_offset": 0, 00:19:54.841 "data_size": 65536 00:19:54.841 } 00:19:54.841 ] 00:19:54.841 }' 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:54.841 23:01:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.098 [2024-12-09 23:01:10.724428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:55.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:55.098 Zero copy mechanism will not be used. 00:19:55.098 Running I/O for 60 seconds... 00:19:55.365 23:01:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:55.365 23:01:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.365 23:01:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.365 [2024-12-09 23:01:11.096245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.365 23:01:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.365 23:01:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:55.365 [2024-12-09 23:01:11.174921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:55.365 [2024-12-09 23:01:11.177302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.655 [2024-12-09 23:01:11.280068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:55.655 [2024-12-09 23:01:11.280874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:55.655 [2024-12-09 23:01:11.508650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:55.655 [2024-12-09 23:01:11.509110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:56.174 182.00 IOPS, 546.00 MiB/s 
[2024-12-09T23:01:12.030Z] [2024-12-09 23:01:11.869310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:56.433 [2024-12-09 23:01:12.087925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:56.433 [2024-12-09 23:01:12.088309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.433 "name": "raid_bdev1", 00:19:56.433 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:56.433 "strip_size_kb": 0, 00:19:56.433 "state": "online", 00:19:56.433 "raid_level": "raid1", 00:19:56.433 "superblock": false, 00:19:56.433 "num_base_bdevs": 2, 00:19:56.433 
"num_base_bdevs_discovered": 2, 00:19:56.433 "num_base_bdevs_operational": 2, 00:19:56.433 "process": { 00:19:56.433 "type": "rebuild", 00:19:56.433 "target": "spare", 00:19:56.433 "progress": { 00:19:56.433 "blocks": 10240, 00:19:56.433 "percent": 15 00:19:56.433 } 00:19:56.433 }, 00:19:56.433 "base_bdevs_list": [ 00:19:56.433 { 00:19:56.433 "name": "spare", 00:19:56.433 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:19:56.433 "is_configured": true, 00:19:56.433 "data_offset": 0, 00:19:56.433 "data_size": 65536 00:19:56.433 }, 00:19:56.433 { 00:19:56.433 "name": "BaseBdev2", 00:19:56.433 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:56.433 "is_configured": true, 00:19:56.433 "data_offset": 0, 00:19:56.433 "data_size": 65536 00:19:56.433 } 00:19:56.433 ] 00:19:56.433 }' 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.433 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.693 [2024-12-09 23:01:12.334956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.693 [2024-12-09 23:01:12.410570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:56.693 [2024-12-09 23:01:12.424220] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:19:56.693 [2024-12-09 23:01:12.437928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.693 [2024-12-09 23:01:12.438076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.693 [2024-12-09 23:01:12.438098] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.693 [2024-12-09 23:01:12.481303] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.693 23:01:12 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.693 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.951 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.951 "name": "raid_bdev1", 00:19:56.951 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:56.952 "strip_size_kb": 0, 00:19:56.952 "state": "online", 00:19:56.952 "raid_level": "raid1", 00:19:56.952 "superblock": false, 00:19:56.952 "num_base_bdevs": 2, 00:19:56.952 "num_base_bdevs_discovered": 1, 00:19:56.952 "num_base_bdevs_operational": 1, 00:19:56.952 "base_bdevs_list": [ 00:19:56.952 { 00:19:56.952 "name": null, 00:19:56.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.952 "is_configured": false, 00:19:56.952 "data_offset": 0, 00:19:56.952 "data_size": 65536 00:19:56.952 }, 00:19:56.952 { 00:19:56.952 "name": "BaseBdev2", 00:19:56.952 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:56.952 "is_configured": true, 00:19:56.952 "data_offset": 0, 00:19:56.952 "data_size": 65536 00:19:56.952 } 00:19:56.952 ] 00:19:56.952 }' 00:19:56.952 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.952 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.211 141.50 IOPS, 424.50 MiB/s [2024-12-09T23:01:13.067Z] 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.211 23:01:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.211 "name": "raid_bdev1", 00:19:57.211 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:57.211 "strip_size_kb": 0, 00:19:57.211 "state": "online", 00:19:57.211 "raid_level": "raid1", 00:19:57.211 "superblock": false, 00:19:57.211 "num_base_bdevs": 2, 00:19:57.211 "num_base_bdevs_discovered": 1, 00:19:57.211 "num_base_bdevs_operational": 1, 00:19:57.211 "base_bdevs_list": [ 00:19:57.211 { 00:19:57.211 "name": null, 00:19:57.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.211 "is_configured": false, 00:19:57.211 "data_offset": 0, 00:19:57.211 "data_size": 65536 00:19:57.211 }, 00:19:57.211 { 00:19:57.211 "name": "BaseBdev2", 00:19:57.211 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:57.211 "is_configured": true, 00:19:57.211 "data_offset": 0, 00:19:57.211 "data_size": 65536 00:19:57.211 } 00:19:57.211 ] 00:19:57.211 }' 00:19:57.211 23:01:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.211 23:01:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.211 23:01:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.470 23:01:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.470 23:01:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:57.470 23:01:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.470 23:01:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.470 [2024-12-09 23:01:13.079323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.470 23:01:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.470 23:01:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:57.470 [2024-12-09 23:01:13.124802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:57.470 [2024-12-09 23:01:13.126900] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:57.470 [2024-12-09 23:01:13.253183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:57.470 [2024-12-09 23:01:13.253935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:57.729 [2024-12-09 23:01:13.468513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:57.729 [2024-12-09 23:01:13.468960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:57.989 [2024-12-09 23:01:13.695156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:57.989 165.00 IOPS, 495.00 MiB/s [2024-12-09T23:01:13.845Z] [2024-12-09 23:01:13.801984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:58.557 23:01:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.557 [2024-12-09 23:01:14.119309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:58.557 [2024-12-09 23:01:14.119876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.557 "name": "raid_bdev1", 00:19:58.557 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:58.557 "strip_size_kb": 0, 00:19:58.557 "state": "online", 00:19:58.557 "raid_level": "raid1", 00:19:58.557 "superblock": false, 00:19:58.557 "num_base_bdevs": 2, 00:19:58.557 "num_base_bdevs_discovered": 2, 00:19:58.557 "num_base_bdevs_operational": 2, 00:19:58.557 "process": { 00:19:58.557 "type": "rebuild", 00:19:58.557 "target": "spare", 00:19:58.557 "progress": { 
00:19:58.557 "blocks": 14336, 00:19:58.557 "percent": 21 00:19:58.557 } 00:19:58.557 }, 00:19:58.557 "base_bdevs_list": [ 00:19:58.557 { 00:19:58.557 "name": "spare", 00:19:58.557 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:19:58.557 "is_configured": true, 00:19:58.557 "data_offset": 0, 00:19:58.557 "data_size": 65536 00:19:58.557 }, 00:19:58.557 { 00:19:58.557 "name": "BaseBdev2", 00:19:58.557 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:58.557 "is_configured": true, 00:19:58.557 "data_offset": 0, 00:19:58.557 "data_size": 65536 00:19:58.557 } 00:19:58.557 ] 00:19:58.557 }' 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:58.557 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=432 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.558 [2024-12-09 23:01:14.248204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.558 "name": "raid_bdev1", 00:19:58.558 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:58.558 "strip_size_kb": 0, 00:19:58.558 "state": "online", 00:19:58.558 "raid_level": "raid1", 00:19:58.558 "superblock": false, 00:19:58.558 "num_base_bdevs": 2, 00:19:58.558 "num_base_bdevs_discovered": 2, 00:19:58.558 "num_base_bdevs_operational": 2, 00:19:58.558 "process": { 00:19:58.558 "type": "rebuild", 00:19:58.558 "target": "spare", 00:19:58.558 "progress": { 00:19:58.558 "blocks": 16384, 00:19:58.558 "percent": 25 00:19:58.558 } 00:19:58.558 }, 00:19:58.558 "base_bdevs_list": [ 00:19:58.558 { 00:19:58.558 "name": "spare", 00:19:58.558 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:19:58.558 "is_configured": true, 00:19:58.558 "data_offset": 0, 00:19:58.558 "data_size": 65536 00:19:58.558 }, 00:19:58.558 { 00:19:58.558 "name": "BaseBdev2", 00:19:58.558 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 
00:19:58.558 "is_configured": true, 00:19:58.558 "data_offset": 0, 00:19:58.558 "data_size": 65536 00:19:58.558 } 00:19:58.558 ] 00:19:58.558 }' 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.558 23:01:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:58.817 [2024-12-09 23:01:14.591053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:59.335 143.75 IOPS, 431.25 MiB/s [2024-12-09T23:01:15.191Z] [2024-12-09 23:01:15.055722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:59.595 [2024-12-09 23:01:15.360008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:59.595 [2024-12-09 23:01:15.360706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.595 23:01:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.595 23:01:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.887 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.887 "name": "raid_bdev1", 00:19:59.887 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:19:59.887 "strip_size_kb": 0, 00:19:59.887 "state": "online", 00:19:59.887 "raid_level": "raid1", 00:19:59.887 "superblock": false, 00:19:59.887 "num_base_bdevs": 2, 00:19:59.887 "num_base_bdevs_discovered": 2, 00:19:59.887 "num_base_bdevs_operational": 2, 00:19:59.887 "process": { 00:19:59.887 "type": "rebuild", 00:19:59.887 "target": "spare", 00:19:59.887 "progress": { 00:19:59.887 "blocks": 32768, 00:19:59.887 "percent": 50 00:19:59.887 } 00:19:59.887 }, 00:19:59.887 "base_bdevs_list": [ 00:19:59.887 { 00:19:59.887 "name": "spare", 00:19:59.887 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:19:59.887 "is_configured": true, 00:19:59.887 "data_offset": 0, 00:19:59.887 "data_size": 65536 00:19:59.887 }, 00:19:59.887 { 00:19:59.887 "name": "BaseBdev2", 00:19:59.887 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:19:59.887 "is_configured": true, 00:19:59.887 "data_offset": 0, 00:19:59.887 "data_size": 65536 00:19:59.887 } 00:19:59.887 ] 00:19:59.887 }' 00:19:59.887 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.887 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.887 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.887 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.887 23:01:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.887 [2024-12-09 23:01:15.562367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:00.824 126.80 IOPS, 380.40 MiB/s [2024-12-09T23:01:16.680Z] [2024-12-09 23:01:16.429409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:00.824 [2024-12-09 23:01:16.545893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.824 "name": "raid_bdev1", 00:20:00.824 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:20:00.824 "strip_size_kb": 0, 00:20:00.824 "state": "online", 00:20:00.824 "raid_level": "raid1", 00:20:00.824 "superblock": false, 00:20:00.824 "num_base_bdevs": 2, 00:20:00.824 "num_base_bdevs_discovered": 2, 00:20:00.824 "num_base_bdevs_operational": 2, 00:20:00.824 "process": { 00:20:00.824 "type": "rebuild", 00:20:00.824 "target": "spare", 00:20:00.824 "progress": { 00:20:00.824 "blocks": 53248, 00:20:00.824 "percent": 81 00:20:00.824 } 00:20:00.824 }, 00:20:00.824 "base_bdevs_list": [ 00:20:00.824 { 00:20:00.824 "name": "spare", 00:20:00.824 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:20:00.824 "is_configured": true, 00:20:00.824 "data_offset": 0, 00:20:00.824 "data_size": 65536 00:20:00.824 }, 00:20:00.824 { 00:20:00.824 "name": "BaseBdev2", 00:20:00.824 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:20:00.824 "is_configured": true, 00:20:00.824 "data_offset": 0, 00:20:00.824 "data_size": 65536 00:20:00.824 } 00:20:00.824 ] 00:20:00.824 }' 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.824 23:01:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:01.651 115.00 IOPS, 345.00 MiB/s [2024-12-09T23:01:17.507Z] [2024-12-09 23:01:17.214935] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:01.651 [2024-12-09 23:01:17.320624] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:01.651 [2024-12-09 23:01:17.323115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.918 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.918 103.57 IOPS, 310.71 MiB/s [2024-12-09T23:01:17.774Z] 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.918 "name": "raid_bdev1", 00:20:01.918 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:20:01.918 "strip_size_kb": 0, 00:20:01.918 "state": "online", 00:20:01.918 "raid_level": "raid1", 00:20:01.918 "superblock": false, 00:20:01.918 "num_base_bdevs": 2, 00:20:01.918 "num_base_bdevs_discovered": 2, 00:20:01.918 
"num_base_bdevs_operational": 2, 00:20:01.918 "base_bdevs_list": [ 00:20:01.918 { 00:20:01.918 "name": "spare", 00:20:01.919 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:20:01.919 "is_configured": true, 00:20:01.919 "data_offset": 0, 00:20:01.919 "data_size": 65536 00:20:01.919 }, 00:20:01.919 { 00:20:01.919 "name": "BaseBdev2", 00:20:01.919 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:20:01.919 "is_configured": true, 00:20:01.919 "data_offset": 0, 00:20:01.919 "data_size": 65536 00:20:01.919 } 00:20:01.919 ] 00:20:01.919 }' 00:20:01.919 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.179 "name": "raid_bdev1", 00:20:02.179 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:20:02.179 "strip_size_kb": 0, 00:20:02.179 "state": "online", 00:20:02.179 "raid_level": "raid1", 00:20:02.179 "superblock": false, 00:20:02.179 "num_base_bdevs": 2, 00:20:02.179 "num_base_bdevs_discovered": 2, 00:20:02.179 "num_base_bdevs_operational": 2, 00:20:02.179 "base_bdevs_list": [ 00:20:02.179 { 00:20:02.179 "name": "spare", 00:20:02.179 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:20:02.179 "is_configured": true, 00:20:02.179 "data_offset": 0, 00:20:02.179 "data_size": 65536 00:20:02.179 }, 00:20:02.179 { 00:20:02.179 "name": "BaseBdev2", 00:20:02.179 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:20:02.179 "is_configured": true, 00:20:02.179 "data_offset": 0, 00:20:02.179 "data_size": 65536 00:20:02.179 } 00:20:02.179 ] 00:20:02.179 }' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.179 23:01:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.179 23:01:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.179 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.179 "name": "raid_bdev1", 00:20:02.179 "uuid": "8e1e3674-b39a-4a5b-a136-a4d8dcb4fa77", 00:20:02.179 "strip_size_kb": 0, 00:20:02.179 "state": "online", 00:20:02.179 "raid_level": "raid1", 00:20:02.179 "superblock": false, 00:20:02.179 "num_base_bdevs": 2, 00:20:02.179 "num_base_bdevs_discovered": 2, 00:20:02.179 "num_base_bdevs_operational": 2, 00:20:02.179 "base_bdevs_list": [ 00:20:02.179 { 00:20:02.179 "name": "spare", 00:20:02.179 "uuid": "cb10da77-536d-5e9f-aa00-cd7e024cb852", 00:20:02.179 "is_configured": true, 00:20:02.179 "data_offset": 0, 00:20:02.179 "data_size": 65536 00:20:02.179 }, 00:20:02.179 { 
00:20:02.179 "name": "BaseBdev2", 00:20:02.179 "uuid": "b71feb22-7e6c-5ba7-b22d-945723663e39", 00:20:02.179 "is_configured": true, 00:20:02.179 "data_offset": 0, 00:20:02.179 "data_size": 65536 00:20:02.179 } 00:20:02.179 ] 00:20:02.179 }' 00:20:02.179 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.179 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.780 [2024-12-09 23:01:18.454386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.780 [2024-12-09 23:01:18.454425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.780 00:20:02.780 Latency(us) 00:20:02.780 [2024-12-09T23:01:18.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.780 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:02.780 raid_bdev1 : 7.80 95.92 287.75 0.00 0.00 14592.88 327.32 109894.43 00:20:02.780 [2024-12-09T23:01:18.636Z] =================================================================================================================== 00:20:02.780 [2024-12-09T23:01:18.636Z] Total : 95.92 287.75 0.00 0.00 14592.88 327.32 109894.43 00:20:02.780 [2024-12-09 23:01:18.533561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.780 [2024-12-09 23:01:18.533637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.780 [2024-12-09 23:01:18.533730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.780 
[2024-12-09 23:01:18.533744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:02.780 { 00:20:02.780 "results": [ 00:20:02.780 { 00:20:02.780 "job": "raid_bdev1", 00:20:02.780 "core_mask": "0x1", 00:20:02.780 "workload": "randrw", 00:20:02.780 "percentage": 50, 00:20:02.780 "status": "finished", 00:20:02.780 "queue_depth": 2, 00:20:02.780 "io_size": 3145728, 00:20:02.780 "runtime": 7.798376, 00:20:02.780 "iops": 95.91740639333112, 00:20:02.780 "mibps": 287.75221917999335, 00:20:02.780 "io_failed": 0, 00:20:02.780 "io_timeout": 0, 00:20:02.780 "avg_latency_us": 14592.882366952339, 00:20:02.780 "min_latency_us": 327.32227074235806, 00:20:02.780 "max_latency_us": 109894.42794759825 00:20:02.780 } 00:20:02.780 ], 00:20:02.780 "core_count": 1 00:20:02.780 } 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:02.780 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:03.041 /dev/nbd0 00:20:03.041 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.299 23:01:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.299 1+0 records in 00:20:03.299 1+0 records out 00:20:03.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545692 s, 7.5 MB/s 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.299 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.300 23:01:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.300 23:01:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:03.300 /dev/nbd1 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.561 1+0 records in 00:20:03.561 1+0 records out 00:20:03.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290266 s, 14.1 MB/s 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.561 
23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.561 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.821 23:01:19 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.821 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:04.080 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77094 00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 77094 ']' 00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 77094 00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.081 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77094 00:20:04.340 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.340 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.340 killing process with pid 77094 00:20:04.340 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77094' 00:20:04.340 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 77094 00:20:04.340 Received shutdown signal, test time was about 9.253602 seconds 00:20:04.340 00:20:04.340 Latency(us) 00:20:04.340 [2024-12-09T23:01:20.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.340 [2024-12-09T23:01:20.196Z] =================================================================================================================== 00:20:04.340 [2024-12-09T23:01:20.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.340 [2024-12-09 23:01:19.962664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.340 23:01:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 77094 00:20:04.603 [2024-12-09 23:01:20.199793] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.018 23:01:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:06.018 00:20:06.018 real 0m12.578s 00:20:06.018 user 0m15.865s 00:20:06.018 sys 0m1.640s 00:20:06.018 23:01:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.018 23:01:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 ************************************ 00:20:06.018 END TEST raid_rebuild_test_io 00:20:06.018 ************************************ 00:20:06.018 23:01:21 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:20:06.018 23:01:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:06.018 23:01:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.018 23:01:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 ************************************ 00:20:06.018 START TEST raid_rebuild_test_sb_io 00:20:06.018 ************************************ 00:20:06.018 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:20:06.018 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77470 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 
-- # waitforlisten 77470 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77470 ']' 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.019 23:01:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.019 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:06.019 Zero copy mechanism will not be used. 00:20:06.019 [2024-12-09 23:01:21.599706] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:20:06.019 [2024-12-09 23:01:21.599818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77470 ] 00:20:06.019 [2024-12-09 23:01:21.774671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.292 [2024-12-09 23:01:21.896820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.292 [2024-12-09 23:01:22.097216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.292 [2024-12-09 23:01:22.097269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 BaseBdev1_malloc 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 [2024-12-09 23:01:22.561118] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:06.867 [2024-12-09 23:01:22.561193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.867 [2024-12-09 23:01:22.561218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:06.867 [2024-12-09 23:01:22.561233] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.867 [2024-12-09 23:01:22.563641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.867 [2024-12-09 23:01:22.563686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:06.867 BaseBdev1 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 BaseBdev2_malloc 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 [2024-12-09 23:01:22.618376] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:06.867 [2024-12-09 23:01:22.618444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:06.867 [2024-12-09 23:01:22.618476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:06.867 [2024-12-09 23:01:22.618488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.867 [2024-12-09 23:01:22.620704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.867 [2024-12-09 23:01:22.620746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:06.867 BaseBdev2 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 spare_malloc 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 spare_delay 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 
[2024-12-09 23:01:22.702516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:06.867 [2024-12-09 23:01:22.702605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.867 [2024-12-09 23:01:22.702630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:06.867 [2024-12-09 23:01:22.702642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.867 [2024-12-09 23:01:22.705135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.867 [2024-12-09 23:01:22.705184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:06.867 spare 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 [2024-12-09 23:01:22.714501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.867 [2024-12-09 23:01:22.716292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.867 [2024-12-09 23:01:22.716507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:06.867 [2024-12-09 23:01:22.716533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:06.867 [2024-12-09 23:01:22.716829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:06.867 [2024-12-09 23:01:22.717030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:06.867 [2024-12-09 
23:01:22.717048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:06.867 [2024-12-09 23:01:22.717233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.867 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.126 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.127 "name": "raid_bdev1", 00:20:07.127 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:07.127 "strip_size_kb": 0, 00:20:07.127 "state": "online", 00:20:07.127 "raid_level": "raid1", 00:20:07.127 "superblock": true, 00:20:07.127 "num_base_bdevs": 2, 00:20:07.127 "num_base_bdevs_discovered": 2, 00:20:07.127 "num_base_bdevs_operational": 2, 00:20:07.127 "base_bdevs_list": [ 00:20:07.127 { 00:20:07.127 "name": "BaseBdev1", 00:20:07.127 "uuid": "aca9d91c-d349-5130-9272-b4ee364351bd", 00:20:07.127 "is_configured": true, 00:20:07.127 "data_offset": 2048, 00:20:07.127 "data_size": 63488 00:20:07.127 }, 00:20:07.127 { 00:20:07.127 "name": "BaseBdev2", 00:20:07.127 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:07.127 "is_configured": true, 00:20:07.127 "data_offset": 2048, 00:20:07.127 "data_size": 63488 00:20:07.127 } 00:20:07.127 ] 00:20:07.127 }' 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.127 23:01:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:07.386 [2024-12-09 23:01:23.173972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.386 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.645 [2024-12-09 23:01:23.249559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.645 "name": "raid_bdev1", 00:20:07.645 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:07.645 "strip_size_kb": 0, 00:20:07.645 "state": "online", 00:20:07.645 "raid_level": "raid1", 00:20:07.645 "superblock": true, 00:20:07.645 "num_base_bdevs": 2, 00:20:07.645 "num_base_bdevs_discovered": 1, 00:20:07.645 "num_base_bdevs_operational": 1, 00:20:07.645 "base_bdevs_list": [ 00:20:07.645 { 00:20:07.645 "name": null, 00:20:07.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.645 "is_configured": false, 00:20:07.645 "data_offset": 0, 00:20:07.645 "data_size": 63488 00:20:07.645 }, 00:20:07.645 { 00:20:07.645 "name": "BaseBdev2", 00:20:07.645 "uuid": 
"61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:07.645 "is_configured": true, 00:20:07.645 "data_offset": 2048, 00:20:07.645 "data_size": 63488 00:20:07.645 } 00:20:07.645 ] 00:20:07.645 }' 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.645 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.645 [2024-12-09 23:01:23.349326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:07.645 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.645 Zero copy mechanism will not be used. 00:20:07.645 Running I/O for 60 seconds... 00:20:07.903 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:07.903 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.903 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.903 [2024-12-09 23:01:23.729418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.162 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.162 23:01:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:08.162 [2024-12-09 23:01:23.776254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:08.162 [2024-12-09 23:01:23.778353] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.162 [2024-12-09 23:01:23.881002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.162 [2024-12-09 23:01:23.881643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.422 [2024-12-09 23:01:24.105704] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.422 [2024-12-09 23:01:24.106046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.682 171.00 IOPS, 513.00 MiB/s [2024-12-09T23:01:24.538Z] [2024-12-09 23:01:24.426145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:08.941 [2024-12-09 23:01:24.635445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.941 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.200 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.200 "name": "raid_bdev1", 00:20:09.200 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:09.200 
"strip_size_kb": 0, 00:20:09.200 "state": "online", 00:20:09.200 "raid_level": "raid1", 00:20:09.200 "superblock": true, 00:20:09.200 "num_base_bdevs": 2, 00:20:09.200 "num_base_bdevs_discovered": 2, 00:20:09.200 "num_base_bdevs_operational": 2, 00:20:09.200 "process": { 00:20:09.200 "type": "rebuild", 00:20:09.200 "target": "spare", 00:20:09.200 "progress": { 00:20:09.200 "blocks": 10240, 00:20:09.200 "percent": 16 00:20:09.200 } 00:20:09.200 }, 00:20:09.200 "base_bdevs_list": [ 00:20:09.200 { 00:20:09.200 "name": "spare", 00:20:09.200 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:09.200 "is_configured": true, 00:20:09.200 "data_offset": 2048, 00:20:09.200 "data_size": 63488 00:20:09.200 }, 00:20:09.200 { 00:20:09.200 "name": "BaseBdev2", 00:20:09.200 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:09.200 "is_configured": true, 00:20:09.200 "data_offset": 2048, 00:20:09.200 "data_size": 63488 00:20:09.200 } 00:20:09.200 ] 00:20:09.200 }' 00:20:09.200 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.201 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.201 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.201 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.201 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:09.201 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.201 23:01:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.201 [2024-12-09 23:01:24.923961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.201 [2024-12-09 23:01:24.953560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:20:09.201 [2024-12-09 23:01:25.054707] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:09.460 [2024-12-09 23:01:25.057898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.460 [2024-12-09 23:01:25.057968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.460 [2024-12-09 23:01:25.057981] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:09.460 [2024-12-09 23:01:25.110169] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.460 "name": "raid_bdev1", 00:20:09.460 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:09.460 "strip_size_kb": 0, 00:20:09.460 "state": "online", 00:20:09.460 "raid_level": "raid1", 00:20:09.460 "superblock": true, 00:20:09.460 "num_base_bdevs": 2, 00:20:09.460 "num_base_bdevs_discovered": 1, 00:20:09.460 "num_base_bdevs_operational": 1, 00:20:09.460 "base_bdevs_list": [ 00:20:09.460 { 00:20:09.460 "name": null, 00:20:09.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.460 "is_configured": false, 00:20:09.460 "data_offset": 0, 00:20:09.460 "data_size": 63488 00:20:09.460 }, 00:20:09.460 { 00:20:09.460 "name": "BaseBdev2", 00:20:09.460 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:09.460 "is_configured": true, 00:20:09.460 "data_offset": 2048, 00:20:09.460 "data_size": 63488 00:20:09.460 } 00:20:09.460 ] 00:20:09.460 }' 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.460 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.978 146.50 IOPS, 439.50 MiB/s [2024-12-09T23:01:25.834Z] 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.978 
23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.978 "name": "raid_bdev1", 00:20:09.978 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:09.978 "strip_size_kb": 0, 00:20:09.978 "state": "online", 00:20:09.978 "raid_level": "raid1", 00:20:09.978 "superblock": true, 00:20:09.978 "num_base_bdevs": 2, 00:20:09.978 "num_base_bdevs_discovered": 1, 00:20:09.978 "num_base_bdevs_operational": 1, 00:20:09.978 "base_bdevs_list": [ 00:20:09.978 { 00:20:09.978 "name": null, 00:20:09.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.978 "is_configured": false, 00:20:09.978 "data_offset": 0, 00:20:09.978 "data_size": 63488 00:20:09.978 }, 00:20:09.978 { 00:20:09.978 "name": "BaseBdev2", 00:20:09.978 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:09.978 "is_configured": true, 00:20:09.978 "data_offset": 2048, 00:20:09.978 "data_size": 63488 00:20:09.978 } 00:20:09.978 ] 00:20:09.978 }' 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.978 23:01:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.978 [2024-12-09 23:01:25.770095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.978 23:01:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:09.978 [2024-12-09 23:01:25.832153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:10.237 [2024-12-09 23:01:25.834215] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:10.237 [2024-12-09 23:01:25.953823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:10.237 [2024-12-09 23:01:25.954447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:10.497 [2024-12-09 23:01:26.183314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:10.497 [2024-12-09 23:01:26.183660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:11.064 151.67 IOPS, 455.00 MiB/s [2024-12-09T23:01:26.920Z] [2024-12-09 23:01:26.637794] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.064 "name": "raid_bdev1", 00:20:11.064 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:11.064 "strip_size_kb": 0, 00:20:11.064 "state": "online", 00:20:11.064 "raid_level": "raid1", 00:20:11.064 "superblock": true, 00:20:11.064 "num_base_bdevs": 2, 00:20:11.064 "num_base_bdevs_discovered": 2, 00:20:11.064 "num_base_bdevs_operational": 2, 00:20:11.064 "process": { 00:20:11.064 "type": "rebuild", 00:20:11.064 "target": "spare", 00:20:11.064 "progress": { 00:20:11.064 "blocks": 10240, 00:20:11.064 "percent": 16 00:20:11.064 } 00:20:11.064 }, 00:20:11.064 "base_bdevs_list": [ 00:20:11.064 { 00:20:11.064 "name": "spare", 
00:20:11.064 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:11.064 "is_configured": true, 00:20:11.064 "data_offset": 2048, 00:20:11.064 "data_size": 63488 00:20:11.064 }, 00:20:11.064 { 00:20:11.064 "name": "BaseBdev2", 00:20:11.064 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:11.064 "is_configured": true, 00:20:11.064 "data_offset": 2048, 00:20:11.064 "data_size": 63488 00:20:11.064 } 00:20:11.064 ] 00:20:11.064 }' 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.064 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:11.323 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=444 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.323 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.324 [2024-12-09 23:01:26.982709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:11.324 [2024-12-09 23:01:26.983286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:11.324 23:01:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.324 23:01:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.324 "name": "raid_bdev1", 00:20:11.324 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:11.324 "strip_size_kb": 0, 00:20:11.324 "state": "online", 00:20:11.324 "raid_level": "raid1", 00:20:11.324 "superblock": true, 00:20:11.324 "num_base_bdevs": 2, 00:20:11.324 "num_base_bdevs_discovered": 2, 00:20:11.324 "num_base_bdevs_operational": 2, 00:20:11.324 "process": { 00:20:11.324 "type": "rebuild", 00:20:11.324 "target": "spare", 00:20:11.324 "progress": { 00:20:11.324 "blocks": 12288, 00:20:11.324 "percent": 19 00:20:11.324 } 00:20:11.324 }, 00:20:11.324 "base_bdevs_list": [ 00:20:11.324 { 00:20:11.324 "name": "spare", 00:20:11.324 
"uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:11.324 "is_configured": true, 00:20:11.324 "data_offset": 2048, 00:20:11.324 "data_size": 63488 00:20:11.324 }, 00:20:11.324 { 00:20:11.324 "name": "BaseBdev2", 00:20:11.324 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:11.324 "is_configured": true, 00:20:11.324 "data_offset": 2048, 00:20:11.324 "data_size": 63488 00:20:11.324 } 00:20:11.324 ] 00:20:11.324 }' 00:20:11.324 23:01:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.324 23:01:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.324 23:01:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.324 23:01:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.324 23:01:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.324 [2024-12-09 23:01:27.098594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:11.583 [2024-12-09 23:01:27.308104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:11.583 [2024-12-09 23:01:27.308744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:11.842 130.75 IOPS, 392.25 MiB/s [2024-12-09T23:01:27.698Z] [2024-12-09 23:01:27.510331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:12.102 [2024-12-09 23:01:27.761739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.362 23:01:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.362 "name": "raid_bdev1", 00:20:12.362 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:12.362 "strip_size_kb": 0, 00:20:12.362 "state": "online", 00:20:12.362 "raid_level": "raid1", 00:20:12.362 "superblock": true, 00:20:12.362 "num_base_bdevs": 2, 00:20:12.362 "num_base_bdevs_discovered": 2, 00:20:12.362 "num_base_bdevs_operational": 2, 00:20:12.362 "process": { 00:20:12.362 "type": "rebuild", 00:20:12.362 "target": "spare", 00:20:12.362 "progress": { 00:20:12.362 "blocks": 28672, 00:20:12.362 "percent": 45 00:20:12.362 } 00:20:12.362 }, 00:20:12.362 "base_bdevs_list": [ 00:20:12.362 { 00:20:12.362 "name": "spare", 00:20:12.362 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:12.362 "is_configured": true, 00:20:12.362 "data_offset": 2048, 
00:20:12.362 "data_size": 63488 00:20:12.362 }, 00:20:12.362 { 00:20:12.362 "name": "BaseBdev2", 00:20:12.362 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:12.362 "is_configured": true, 00:20:12.362 "data_offset": 2048, 00:20:12.362 "data_size": 63488 00:20:12.362 } 00:20:12.362 ] 00:20:12.362 }' 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.362 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.621 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.621 23:01:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:12.621 [2024-12-09 23:01:28.327761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:12.621 [2024-12-09 23:01:28.328091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:12.880 116.20 IOPS, 348.60 MiB/s [2024-12-09T23:01:28.736Z] [2024-12-09 23:01:28.631435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:12.880 [2024-12-09 23:01:28.632074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:13.139 [2024-12-09 23:01:28.861108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.397 
23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.397 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.656 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.656 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.656 "name": "raid_bdev1", 00:20:13.656 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:13.656 "strip_size_kb": 0, 00:20:13.656 "state": "online", 00:20:13.656 "raid_level": "raid1", 00:20:13.656 "superblock": true, 00:20:13.656 "num_base_bdevs": 2, 00:20:13.656 "num_base_bdevs_discovered": 2, 00:20:13.656 "num_base_bdevs_operational": 2, 00:20:13.656 "process": { 00:20:13.656 "type": "rebuild", 00:20:13.656 "target": "spare", 00:20:13.656 "progress": { 00:20:13.656 "blocks": 45056, 00:20:13.656 "percent": 70 00:20:13.656 } 00:20:13.656 }, 00:20:13.656 "base_bdevs_list": [ 00:20:13.656 { 00:20:13.656 "name": "spare", 00:20:13.656 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:13.656 "is_configured": true, 00:20:13.656 "data_offset": 2048, 00:20:13.656 "data_size": 63488 00:20:13.656 }, 00:20:13.656 { 00:20:13.656 "name": "BaseBdev2", 00:20:13.656 "uuid": 
"61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:13.656 "is_configured": true, 00:20:13.656 "data_offset": 2048, 00:20:13.656 "data_size": 63488 00:20:13.656 } 00:20:13.656 ] 00:20:13.656 }' 00:20:13.656 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.656 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.656 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.656 102.50 IOPS, 307.50 MiB/s [2024-12-09T23:01:29.512Z] 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.656 23:01:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.915 [2024-12-09 23:01:29.527212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:13.915 [2024-12-09 23:01:29.527873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:13.915 [2024-12-09 23:01:29.652835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:14.484 [2024-12-09 23:01:30.200769] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:14.484 [2024-12-09 23:01:30.300611] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:14.484 [2024-12-09 23:01:30.303509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.744 93.57 IOPS, 280.71 MiB/s [2024-12-09T23:01:30.600Z] 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.744 23:01:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.744 "name": "raid_bdev1", 00:20:14.744 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:14.744 "strip_size_kb": 0, 00:20:14.744 "state": "online", 00:20:14.744 "raid_level": "raid1", 00:20:14.744 "superblock": true, 00:20:14.744 "num_base_bdevs": 2, 00:20:14.744 "num_base_bdevs_discovered": 2, 00:20:14.744 "num_base_bdevs_operational": 2, 00:20:14.744 "base_bdevs_list": [ 00:20:14.744 { 00:20:14.744 "name": "spare", 00:20:14.744 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:14.744 "is_configured": true, 00:20:14.744 "data_offset": 2048, 00:20:14.744 "data_size": 63488 00:20:14.744 }, 00:20:14.744 { 00:20:14.744 "name": "BaseBdev2", 00:20:14.744 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:14.744 "is_configured": true, 00:20:14.744 "data_offset": 2048, 00:20:14.744 "data_size": 63488 00:20:14.744 } 00:20:14.744 ] 00:20:14.744 }' 00:20:14.744 23:01:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.744 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.004 "name": "raid_bdev1", 00:20:15.004 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:15.004 "strip_size_kb": 0, 00:20:15.004 "state": "online", 00:20:15.004 "raid_level": "raid1", 00:20:15.004 "superblock": 
true, 00:20:15.004 "num_base_bdevs": 2, 00:20:15.004 "num_base_bdevs_discovered": 2, 00:20:15.004 "num_base_bdevs_operational": 2, 00:20:15.004 "base_bdevs_list": [ 00:20:15.004 { 00:20:15.004 "name": "spare", 00:20:15.004 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:15.004 "is_configured": true, 00:20:15.004 "data_offset": 2048, 00:20:15.004 "data_size": 63488 00:20:15.004 }, 00:20:15.004 { 00:20:15.004 "name": "BaseBdev2", 00:20:15.004 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:15.004 "is_configured": true, 00:20:15.004 "data_offset": 2048, 00:20:15.004 "data_size": 63488 00:20:15.004 } 00:20:15.004 ] 00:20:15.004 }' 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.004 "name": "raid_bdev1", 00:20:15.004 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:15.004 "strip_size_kb": 0, 00:20:15.004 "state": "online", 00:20:15.004 "raid_level": "raid1", 00:20:15.004 "superblock": true, 00:20:15.004 "num_base_bdevs": 2, 00:20:15.004 "num_base_bdevs_discovered": 2, 00:20:15.004 "num_base_bdevs_operational": 2, 00:20:15.004 "base_bdevs_list": [ 00:20:15.004 { 00:20:15.004 "name": "spare", 00:20:15.004 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:15.004 "is_configured": true, 00:20:15.004 "data_offset": 2048, 00:20:15.004 "data_size": 63488 00:20:15.004 }, 00:20:15.004 { 00:20:15.004 "name": "BaseBdev2", 00:20:15.004 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:15.004 "is_configured": true, 00:20:15.004 "data_offset": 2048, 00:20:15.004 "data_size": 63488 00:20:15.004 } 00:20:15.004 ] 00:20:15.004 }' 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.004 23:01:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.573 23:01:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.574 [2024-12-09 23:01:31.193580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.574 [2024-12-09 23:01:31.193628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.574 00:20:15.574 Latency(us) 00:20:15.574 [2024-12-09T23:01:31.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.574 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:15.574 raid_bdev1 : 7.95 86.55 259.65 0.00 0.00 15434.65 336.27 113557.58 00:20:15.574 [2024-12-09T23:01:31.430Z] =================================================================================================================== 00:20:15.574 [2024-12-09T23:01:31.430Z] Total : 86.55 259.65 0.00 0.00 15434.65 336.27 113557.58 00:20:15.574 [2024-12-09 23:01:31.310996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.574 [2024-12-09 23:01:31.311107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.574 [2024-12-09 23:01:31.311197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.574 [2024-12-09 23:01:31.311210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:15.574 { 00:20:15.574 "results": [ 00:20:15.574 { 00:20:15.574 "job": "raid_bdev1", 00:20:15.574 "core_mask": "0x1", 00:20:15.574 "workload": "randrw", 00:20:15.574 "percentage": 50, 00:20:15.574 "status": "finished", 00:20:15.574 "queue_depth": 2, 00:20:15.574 "io_size": 3145728, 00:20:15.574 "runtime": 
7.949133, 00:20:15.574 "iops": 86.55031938703252, 00:20:15.574 "mibps": 259.65095816109755, 00:20:15.574 "io_failed": 0, 00:20:15.574 "io_timeout": 0, 00:20:15.574 "avg_latency_us": 15434.653884431806, 00:20:15.574 "min_latency_us": 336.2655021834061, 00:20:15.574 "max_latency_us": 113557.57554585153 00:20:15.574 } 00:20:15.574 ], 00:20:15.574 "core_count": 1 00:20:15.574 } 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:15.574 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:15.833 /dev/nbd0 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:15.833 1+0 records in 00:20:15.833 1+0 records out 00:20:15.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377316 s, 10.9 MB/s 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:15.833 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:16.092 /dev/nbd1 00:20:16.092 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:16.092 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:16.092 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:16.092 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:16.093 1+0 records in 00:20:16.093 1+0 records out 00:20:16.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376633 s, 10.9 MB/s 00:20:16.093 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.352 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:16.352 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.352 23:01:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:16.353 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:16.353 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.353 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.353 23:01:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.353 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.612 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:16.871 23:01:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.871 [2024-12-09 23:01:32.606080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.871 [2024-12-09 23:01:32.606147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.871 [2024-12-09 23:01:32.606168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:16.871 [2024-12-09 23:01:32.606179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.871 [2024-12-09 23:01:32.608627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.871 [2024-12-09 23:01:32.608711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.871 [2024-12-09 23:01:32.608849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:16.871 [2024-12-09 23:01:32.608934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.871 [2024-12-09 23:01:32.609118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.871 spare 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
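The earlier `dd`/`stat`/`rm` sequence in the trace (`common/autotest_common.sh@889`-`@893`) is a read probe: copy one 4096-byte block from the device with O_DIRECT, then treat a non-zero copy size as success. A hedged sketch of that check; `probe_readable` is a hypothetical name, and an ordinary file stands in for `/dev/nbd1`:

```shell
# Read-probe sketch: dd one 4 KiB block from the device into a scratch
# file, then verify the copy is non-empty (mirrors '[' 4096 '!=' 0 ']'
# in the trace). O_DIRECT can fail on plain files, so fall back without it.
probe_readable() {
    local dev=$1 out size
    out=$(mktemp)
    dd if="$dev" of="$out" bs=4096 count=1 iflag=direct 2>/dev/null ||
        dd if="$dev" of="$out" bs=4096 count=1 2>/dev/null
    size=$(stat -c %s "$out")
    rm -f "$out"
    [ "$size" != 0 ]
}
```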
00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.871 [2024-12-09 23:01:32.709092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:16.871 [2024-12-09 23:01:32.709242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:16.871 [2024-12-09 23:01:32.709706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:20:16.871 [2024-12-09 23:01:32.709976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:16.871 [2024-12-09 23:01:32.710027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:16.871 [2024-12-09 23:01:32.710339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.871 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.872 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.129 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.129 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.129 "name": "raid_bdev1", 00:20:17.129 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:17.129 "strip_size_kb": 0, 00:20:17.129 "state": "online", 00:20:17.129 "raid_level": "raid1", 00:20:17.129 "superblock": true, 00:20:17.129 "num_base_bdevs": 2, 00:20:17.129 "num_base_bdevs_discovered": 2, 00:20:17.129 "num_base_bdevs_operational": 2, 00:20:17.129 "base_bdevs_list": [ 00:20:17.129 { 00:20:17.129 "name": "spare", 00:20:17.129 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:17.129 "is_configured": true, 00:20:17.129 "data_offset": 2048, 00:20:17.129 "data_size": 63488 00:20:17.129 }, 00:20:17.129 { 00:20:17.129 "name": "BaseBdev2", 00:20:17.129 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:17.129 "is_configured": true, 00:20:17.129 "data_offset": 2048, 00:20:17.129 "data_size": 63488 00:20:17.129 } 00:20:17.129 ] 00:20:17.129 }' 00:20:17.129 23:01:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.129 
23:01:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.389 "name": "raid_bdev1", 00:20:17.389 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:17.389 "strip_size_kb": 0, 00:20:17.389 "state": "online", 00:20:17.389 "raid_level": "raid1", 00:20:17.389 "superblock": true, 00:20:17.389 "num_base_bdevs": 2, 00:20:17.389 "num_base_bdevs_discovered": 2, 00:20:17.389 "num_base_bdevs_operational": 2, 00:20:17.389 "base_bdevs_list": [ 00:20:17.389 { 00:20:17.389 "name": "spare", 00:20:17.389 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:17.389 "is_configured": true, 00:20:17.389 "data_offset": 2048, 00:20:17.389 "data_size": 63488 00:20:17.389 }, 00:20:17.389 { 00:20:17.389 "name": "BaseBdev2", 
00:20:17.389 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:17.389 "is_configured": true, 00:20:17.389 "data_offset": 2048, 00:20:17.389 "data_size": 63488 00:20:17.389 } 00:20:17.389 ] 00:20:17.389 }' 00:20:17.389 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.649 [2024-12-09 23:01:33.365299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.649 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.650 "name": "raid_bdev1", 00:20:17.650 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:17.650 "strip_size_kb": 0, 00:20:17.650 "state": "online", 00:20:17.650 "raid_level": "raid1", 00:20:17.650 "superblock": true, 00:20:17.650 "num_base_bdevs": 2, 00:20:17.650 "num_base_bdevs_discovered": 1, 
00:20:17.650 "num_base_bdevs_operational": 1, 00:20:17.650 "base_bdevs_list": [ 00:20:17.650 { 00:20:17.650 "name": null, 00:20:17.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.650 "is_configured": false, 00:20:17.650 "data_offset": 0, 00:20:17.650 "data_size": 63488 00:20:17.650 }, 00:20:17.650 { 00:20:17.650 "name": "BaseBdev2", 00:20:17.650 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:17.650 "is_configured": true, 00:20:17.650 "data_offset": 2048, 00:20:17.650 "data_size": 63488 00:20:17.650 } 00:20:17.650 ] 00:20:17.650 }' 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.650 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.235 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.235 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.235 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.235 [2024-12-09 23:01:33.816642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.235 [2024-12-09 23:01:33.816931] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:18.235 [2024-12-09 23:01:33.816999] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
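`verify_raid_bdev_state` above fetches `bdev_raid_get_bdevs all` via rpc.py, selects `raid_bdev1` with jq, and compares the state, RAID level, and base-bdev counts against the expected values. A hedged pure-shell sketch of those field checks against JSON like the record above; `grep`/`cut` stand in for jq here purely so the example is self-contained, and the helper names are illustrative assumptions:

```shell
# Minimal field extractors for flat JSON like the raid_bdev_info dump in
# the log (the real helper uses jq). json_str pulls quoted values,
# json_num pulls bare integers.
json_str() { echo "$2" | grep -o "\"$1\": \"[^\"]*\"" | cut -d'"' -f4; }
json_num() { echo "$2" | grep -o "\"$1\": [0-9]*" | grep -o '[0-9]*$'; }

raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'
state=$(json_str state "$raid_bdev_info")
level=$(json_str raid_level "$raid_bdev_info")
discovered=$(json_num num_base_bdevs_discovered "$raid_bdev_info")
operational=$(json_num num_base_bdevs_operational "$raid_bdev_info")
```

After removing the `spare` base bdev, the trace expects `online raid1 0 1`: state still online, one bdev discovered and operational, with the removed slot shown as an all-zero UUID placeholder.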
00:20:18.235 [2024-12-09 23:01:33.817075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.235 [2024-12-09 23:01:33.836227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:20:18.235 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.235 23:01:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:18.235 [2024-12-09 23:01:33.838467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.174 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.174 "name": "raid_bdev1", 00:20:19.174 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:19.174 "strip_size_kb": 0, 00:20:19.174 "state": "online", 
00:20:19.174 "raid_level": "raid1", 00:20:19.174 "superblock": true, 00:20:19.174 "num_base_bdevs": 2, 00:20:19.174 "num_base_bdevs_discovered": 2, 00:20:19.174 "num_base_bdevs_operational": 2, 00:20:19.174 "process": { 00:20:19.174 "type": "rebuild", 00:20:19.174 "target": "spare", 00:20:19.174 "progress": { 00:20:19.174 "blocks": 20480, 00:20:19.174 "percent": 32 00:20:19.174 } 00:20:19.174 }, 00:20:19.174 "base_bdevs_list": [ 00:20:19.174 { 00:20:19.174 "name": "spare", 00:20:19.174 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:19.174 "is_configured": true, 00:20:19.174 "data_offset": 2048, 00:20:19.174 "data_size": 63488 00:20:19.174 }, 00:20:19.174 { 00:20:19.174 "name": "BaseBdev2", 00:20:19.174 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:19.175 "is_configured": true, 00:20:19.175 "data_offset": 2048, 00:20:19.175 "data_size": 63488 00:20:19.175 } 00:20:19.175 ] 00:20:19.175 }' 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.175 23:01:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.175 [2024-12-09 23:01:34.982243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.434 [2024-12-09 23:01:35.044532] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:19.434 [2024-12-09 
23:01:35.044664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.434 [2024-12-09 23:01:35.044713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.434 [2024-12-09 23:01:35.044737] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.434 "name": "raid_bdev1", 00:20:19.434 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:19.434 "strip_size_kb": 0, 00:20:19.434 "state": "online", 00:20:19.434 "raid_level": "raid1", 00:20:19.434 "superblock": true, 00:20:19.434 "num_base_bdevs": 2, 00:20:19.434 "num_base_bdevs_discovered": 1, 00:20:19.434 "num_base_bdevs_operational": 1, 00:20:19.434 "base_bdevs_list": [ 00:20:19.434 { 00:20:19.434 "name": null, 00:20:19.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.434 "is_configured": false, 00:20:19.434 "data_offset": 0, 00:20:19.434 "data_size": 63488 00:20:19.434 }, 00:20:19.434 { 00:20:19.434 "name": "BaseBdev2", 00:20:19.434 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:19.434 "is_configured": true, 00:20:19.434 "data_offset": 2048, 00:20:19.434 "data_size": 63488 00:20:19.434 } 00:20:19.434 ] 00:20:19.434 }' 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.434 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.693 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:19.693 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.693 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.693 [2024-12-09 23:01:35.529585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:19.693 [2024-12-09 23:01:35.529658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.693 [2024-12-09 23:01:35.529682] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:20:19.693 [2024-12-09 23:01:35.529692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.693 [2024-12-09 23:01:35.530179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.693 [2024-12-09 23:01:35.530198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:19.693 [2024-12-09 23:01:35.530318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:19.693 [2024-12-09 23:01:35.530331] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.693 [2024-12-09 23:01:35.530344] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:19.693 [2024-12-09 23:01:35.530364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.693 [2024-12-09 23:01:35.547134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:20:19.952 spare 00:20:19.952 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.952 [2024-12-09 23:01:35.549026] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.953 23:01:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.894 "name": "raid_bdev1", 00:20:20.894 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:20.894 "strip_size_kb": 0, 00:20:20.894 "state": "online", 00:20:20.894 "raid_level": "raid1", 00:20:20.894 "superblock": true, 00:20:20.894 "num_base_bdevs": 2, 00:20:20.894 "num_base_bdevs_discovered": 2, 00:20:20.894 "num_base_bdevs_operational": 2, 00:20:20.894 "process": { 00:20:20.894 "type": "rebuild", 00:20:20.894 "target": "spare", 00:20:20.894 "progress": { 00:20:20.894 "blocks": 20480, 00:20:20.894 "percent": 32 00:20:20.894 } 00:20:20.894 }, 00:20:20.894 "base_bdevs_list": [ 00:20:20.894 { 00:20:20.894 "name": "spare", 00:20:20.894 "uuid": "3d5c2069-0230-56cd-82a5-bb6a2bab7182", 00:20:20.894 "is_configured": true, 00:20:20.894 "data_offset": 2048, 00:20:20.894 "data_size": 63488 00:20:20.894 }, 00:20:20.894 { 00:20:20.894 "name": "BaseBdev2", 00:20:20.894 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:20.894 "is_configured": true, 00:20:20.894 "data_offset": 2048, 00:20:20.894 "data_size": 63488 00:20:20.894 } 00:20:20.894 ] 00:20:20.894 }' 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.894 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.894 [2024-12-09 23:01:36.708706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.154 [2024-12-09 23:01:36.754953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.154 [2024-12-09 23:01:36.755074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.154 [2024-12-09 23:01:36.755091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.154 [2024-12-09 23:01:36.755100] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.154 "name": "raid_bdev1", 00:20:21.154 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:21.154 "strip_size_kb": 0, 00:20:21.154 "state": "online", 00:20:21.154 "raid_level": "raid1", 00:20:21.154 "superblock": true, 00:20:21.154 "num_base_bdevs": 2, 00:20:21.154 "num_base_bdevs_discovered": 1, 00:20:21.154 "num_base_bdevs_operational": 1, 00:20:21.154 "base_bdevs_list": [ 00:20:21.154 { 00:20:21.154 "name": null, 00:20:21.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.154 "is_configured": false, 00:20:21.154 "data_offset": 0, 00:20:21.154 "data_size": 63488 00:20:21.154 }, 00:20:21.154 { 00:20:21.154 "name": "BaseBdev2", 00:20:21.154 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:21.154 "is_configured": true, 00:20:21.154 "data_offset": 2048, 00:20:21.154 "data_size": 63488 00:20:21.154 } 00:20:21.154 ] 00:20:21.154 }' 
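The `jq -r '.process.type // "none"'` filters in the trace rely on jq's alternative operator: while a rebuild is running the raid bdev reports a `"process"` object (as in the earlier dump with `"type": "rebuild", "target": "spare"`), and once it finishes the object disappears and the filter falls back to `"none"`. A hedged sketch of that defaulting logic without jq; `process_type` is a hypothetical helper name:

```shell
# Emulates `.process.type // "none"`: extract process.type from flat JSON,
# printing "none" when no process object is present (i.e. no rebuild active).
process_type() {
    local info=$1 t
    t=$(echo "$info" | grep -o '"type": "[^"]*"' | cut -d'"' -f4 || true)
    echo "${t:-none}"
}

rebuilding='{ "name": "raid_bdev1", "process": { "type": "rebuild", "target": "spare" } }'
idle='{ "name": "raid_bdev1", "state": "online" }'
```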
00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.154 23:01:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.417 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.681 "name": "raid_bdev1", 00:20:21.681 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:21.681 "strip_size_kb": 0, 00:20:21.681 "state": "online", 00:20:21.681 "raid_level": "raid1", 00:20:21.681 "superblock": true, 00:20:21.681 "num_base_bdevs": 2, 00:20:21.681 "num_base_bdevs_discovered": 1, 00:20:21.681 "num_base_bdevs_operational": 1, 00:20:21.681 "base_bdevs_list": [ 00:20:21.681 { 00:20:21.681 "name": null, 00:20:21.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.681 "is_configured": false, 00:20:21.681 "data_offset": 0, 
00:20:21.681 "data_size": 63488 00:20:21.681 }, 00:20:21.681 { 00:20:21.681 "name": "BaseBdev2", 00:20:21.681 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:21.681 "is_configured": true, 00:20:21.681 "data_offset": 2048, 00:20:21.681 "data_size": 63488 00:20:21.681 } 00:20:21.681 ] 00:20:21.681 }' 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.681 [2024-12-09 23:01:37.421967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:21.681 [2024-12-09 23:01:37.422036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.681 [2024-12-09 23:01:37.422058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:21.681 [2024-12-09 23:01:37.422070] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.681 [2024-12-09 23:01:37.422583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.681 [2024-12-09 23:01:37.422618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:21.681 [2024-12-09 23:01:37.422712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:21.681 [2024-12-09 23:01:37.422735] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.681 [2024-12-09 23:01:37.422743] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:21.681 [2024-12-09 23:01:37.422756] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:21.681 BaseBdev1 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.681 23:01:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.633 "name": "raid_bdev1", 00:20:22.633 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:22.633 "strip_size_kb": 0, 00:20:22.633 "state": "online", 00:20:22.633 "raid_level": "raid1", 00:20:22.633 "superblock": true, 00:20:22.633 "num_base_bdevs": 2, 00:20:22.633 "num_base_bdevs_discovered": 1, 00:20:22.633 "num_base_bdevs_operational": 1, 00:20:22.633 "base_bdevs_list": [ 00:20:22.633 { 00:20:22.633 "name": null, 00:20:22.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.633 "is_configured": false, 00:20:22.633 "data_offset": 0, 00:20:22.633 "data_size": 63488 00:20:22.633 }, 00:20:22.633 { 00:20:22.633 "name": "BaseBdev2", 00:20:22.633 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:22.633 "is_configured": true, 00:20:22.633 "data_offset": 2048, 00:20:22.633 "data_size": 63488 00:20:22.633 } 00:20:22.633 ] 00:20:22.633 }' 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.633 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.223 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.223 "name": "raid_bdev1", 00:20:23.223 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:23.223 "strip_size_kb": 0, 00:20:23.223 "state": "online", 00:20:23.223 "raid_level": "raid1", 00:20:23.223 "superblock": true, 00:20:23.223 "num_base_bdevs": 2, 00:20:23.223 "num_base_bdevs_discovered": 1, 00:20:23.223 "num_base_bdevs_operational": 1, 00:20:23.223 "base_bdevs_list": [ 00:20:23.224 { 00:20:23.224 "name": null, 00:20:23.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.224 "is_configured": false, 00:20:23.224 "data_offset": 0, 00:20:23.224 "data_size": 63488 00:20:23.224 }, 00:20:23.224 { 00:20:23.224 "name": "BaseBdev2", 00:20:23.224 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:23.224 "is_configured": true, 
00:20:23.224 "data_offset": 2048, 00:20:23.224 "data_size": 63488 00:20:23.224 } 00:20:23.224 ] 00:20:23.224 }' 00:20:23.224 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.224 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:23.224 23:01:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:23.224 [2024-12-09 23:01:39.011675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.224 [2024-12-09 23:01:39.011861] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:23.224 [2024-12-09 23:01:39.011876] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:23.224 request: 00:20:23.224 { 00:20:23.224 "base_bdev": "BaseBdev1", 00:20:23.224 "raid_bdev": "raid_bdev1", 00:20:23.224 "method": "bdev_raid_add_base_bdev", 00:20:23.224 "req_id": 1 00:20:23.224 } 00:20:23.224 Got JSON-RPC error response 00:20:23.224 response: 00:20:23.224 { 00:20:23.224 "code": -22, 00:20:23.224 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:23.224 } 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.224 23:01:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.607 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.607 "name": "raid_bdev1", 00:20:24.607 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:24.607 "strip_size_kb": 0, 00:20:24.607 "state": "online", 00:20:24.607 "raid_level": "raid1", 00:20:24.607 "superblock": true, 00:20:24.607 "num_base_bdevs": 2, 00:20:24.608 "num_base_bdevs_discovered": 1, 00:20:24.608 "num_base_bdevs_operational": 1, 00:20:24.608 "base_bdevs_list": [ 00:20:24.608 { 00:20:24.608 "name": null, 00:20:24.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.608 "is_configured": false, 00:20:24.608 "data_offset": 0, 00:20:24.608 "data_size": 63488 00:20:24.608 }, 00:20:24.608 { 00:20:24.608 "name": "BaseBdev2", 00:20:24.608 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:24.608 "is_configured": true, 00:20:24.608 "data_offset": 2048, 00:20:24.608 "data_size": 63488 00:20:24.608 } 00:20:24.608 ] 00:20:24.608 }' 
00:20:24.608 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.608 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.608 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.608 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.608 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.867 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.868 "name": "raid_bdev1", 00:20:24.868 "uuid": "977636c3-30d9-43bd-b654-e8dfc4d83726", 00:20:24.868 "strip_size_kb": 0, 00:20:24.868 "state": "online", 00:20:24.868 "raid_level": "raid1", 00:20:24.868 "superblock": true, 00:20:24.868 "num_base_bdevs": 2, 00:20:24.868 "num_base_bdevs_discovered": 1, 00:20:24.868 "num_base_bdevs_operational": 1, 00:20:24.868 "base_bdevs_list": [ 00:20:24.868 { 00:20:24.868 "name": null, 00:20:24.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.868 "is_configured": false, 00:20:24.868 "data_offset": 0, 
00:20:24.868 "data_size": 63488 00:20:24.868 }, 00:20:24.868 { 00:20:24.868 "name": "BaseBdev2", 00:20:24.868 "uuid": "61b7075e-b2b3-51f1-b005-0a35d554518b", 00:20:24.868 "is_configured": true, 00:20:24.868 "data_offset": 2048, 00:20:24.868 "data_size": 63488 00:20:24.868 } 00:20:24.868 ] 00:20:24.868 }' 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77470 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77470 ']' 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77470 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77470 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.868 killing process with pid 77470 00:20:24.868 Received shutdown signal, test time was about 17.299415 seconds 00:20:24.868 00:20:24.868 Latency(us) 00:20:24.868 [2024-12-09T23:01:40.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.868 [2024-12-09T23:01:40.724Z] 
=================================================================================================================== 00:20:24.868 [2024-12-09T23:01:40.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77470' 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77470 00:20:24.868 [2024-12-09 23:01:40.617615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:24.868 [2024-12-09 23:01:40.617755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.868 23:01:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77470 00:20:24.868 [2024-12-09 23:01:40.617819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.868 [2024-12-09 23:01:40.617830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:25.128 [2024-12-09 23:01:40.869125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:26.507 00:20:26.507 real 0m20.708s 00:20:26.507 user 0m27.145s 00:20:26.507 sys 0m2.252s 00:20:26.507 ************************************ 00:20:26.507 END TEST raid_rebuild_test_sb_io 00:20:26.507 ************************************ 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.507 23:01:42 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:20:26.507 23:01:42 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:26.507 23:01:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 
7 -le 1 ']' 00:20:26.507 23:01:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.507 23:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.507 ************************************ 00:20:26.507 START TEST raid_rebuild_test 00:20:26.507 ************************************ 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78170 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78170 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78170 ']' 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.507 23:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.767 [2024-12-09 23:01:42.379193] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:20:26.767 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.767 Zero copy mechanism will not be used. 00:20:26.767 [2024-12-09 23:01:42.379943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78170 ] 00:20:26.767 [2024-12-09 23:01:42.556919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.027 [2024-12-09 23:01:42.693670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.285 [2024-12-09 23:01:42.921416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.285 [2024-12-09 23:01:42.921500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.544 BaseBdev1_malloc 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.544 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.544 [2024-12-09 23:01:43.285612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:27.544 [2024-12-09 23:01:43.285672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.544 [2024-12-09 23:01:43.285702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:27.544 [2024-12-09 23:01:43.285713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.544 [2024-12-09 23:01:43.287709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.544 [2024-12-09 23:01:43.287749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:27.544 BaseBdev1 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.545 23:01:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.545 BaseBdev2_malloc 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.545 [2024-12-09 23:01:43.341161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:27.545 [2024-12-09 23:01:43.341293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.545 [2024-12-09 23:01:43.341321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:27.545 [2024-12-09 23:01:43.341337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.545 [2024-12-09 23:01:43.343644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.545 [2024-12-09 23:01:43.343681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:27.545 BaseBdev2 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.545 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.804 BaseBdev3_malloc 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.804 23:01:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.804 [2024-12-09 23:01:43.407792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:27.804 [2024-12-09 23:01:43.407853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.804 [2024-12-09 23:01:43.407876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:27.804 [2024-12-09 23:01:43.407887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.804 [2024-12-09 23:01:43.410197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.804 [2024-12-09 23:01:43.410243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:27.804 BaseBdev3 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.804 BaseBdev4_malloc 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.804 [2024-12-09 23:01:43.461885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:27.804 [2024-12-09 23:01:43.461944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.804 [2024-12-09 23:01:43.461965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:27.804 [2024-12-09 23:01:43.461975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.804 [2024-12-09 23:01:43.464011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.804 [2024-12-09 23:01:43.464051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:27.804 BaseBdev4 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.804 spare_malloc 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:27.804 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.805 spare_delay 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.805 [2024-12-09 23:01:43.528701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.805 [2024-12-09 23:01:43.528791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.805 [2024-12-09 23:01:43.528811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:27.805 [2024-12-09 23:01:43.528822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.805 [2024-12-09 23:01:43.530777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.805 [2024-12-09 23:01:43.530814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.805 spare 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.805 [2024-12-09 23:01:43.540724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.805 [2024-12-09 23:01:43.542454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.805 [2024-12-09 23:01:43.542531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.805 [2024-12-09 23:01:43.542583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:27.805 [2024-12-09 
23:01:43.542663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:27.805 [2024-12-09 23:01:43.542676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:27.805 [2024-12-09 23:01:43.542927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:27.805 [2024-12-09 23:01:43.543100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:27.805 [2024-12-09 23:01:43.543112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:27.805 [2024-12-09 23:01:43.543268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.805 "name": "raid_bdev1", 00:20:27.805 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:27.805 "strip_size_kb": 0, 00:20:27.805 "state": "online", 00:20:27.805 "raid_level": "raid1", 00:20:27.805 "superblock": false, 00:20:27.805 "num_base_bdevs": 4, 00:20:27.805 "num_base_bdevs_discovered": 4, 00:20:27.805 "num_base_bdevs_operational": 4, 00:20:27.805 "base_bdevs_list": [ 00:20:27.805 { 00:20:27.805 "name": "BaseBdev1", 00:20:27.805 "uuid": "16e86c72-14fc-5a5f-bc9c-80ae2981da30", 00:20:27.805 "is_configured": true, 00:20:27.805 "data_offset": 0, 00:20:27.805 "data_size": 65536 00:20:27.805 }, 00:20:27.805 { 00:20:27.805 "name": "BaseBdev2", 00:20:27.805 "uuid": "4a4f59d7-6af5-5b0d-b1ae-e021ad262dd2", 00:20:27.805 "is_configured": true, 00:20:27.805 "data_offset": 0, 00:20:27.805 "data_size": 65536 00:20:27.805 }, 00:20:27.805 { 00:20:27.805 "name": "BaseBdev3", 00:20:27.805 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:27.805 "is_configured": true, 00:20:27.805 "data_offset": 0, 00:20:27.805 "data_size": 65536 00:20:27.805 }, 00:20:27.805 { 00:20:27.805 "name": "BaseBdev4", 00:20:27.805 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:27.805 "is_configured": true, 00:20:27.805 "data_offset": 0, 00:20:27.805 "data_size": 65536 00:20:27.805 } 00:20:27.805 ] 00:20:27.805 }' 00:20:27.805 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.805 23:01:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.376 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.376 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.376 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.376 23:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:28.376 [2024-12-09 23:01:43.976531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.376 23:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:28.376 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:28.636 [2024-12-09 23:01:44.263713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:28.636 /dev/nbd0 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # (( i <= 20 )) 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.636 1+0 records in 00:20:28.636 1+0 records out 00:20:28.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015416 s, 2.7 MB/s 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:28.636 23:01:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:35.220 65536+0 records in 00:20:35.220 65536+0 records out 00:20:35.220 33554432 bytes (34 MB, 32 MiB) copied, 6.59068 s, 5.1 MB/s 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.220 23:01:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:35.479 [2024-12-09 23:01:51.160583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.479 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:35.479 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:35.479 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.480 [2024-12-09 23:01:51.200655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:35.480 23:01:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.480 "name": "raid_bdev1", 00:20:35.480 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:35.480 "strip_size_kb": 0, 00:20:35.480 "state": "online", 00:20:35.480 "raid_level": "raid1", 00:20:35.480 "superblock": false, 00:20:35.480 "num_base_bdevs": 4, 00:20:35.480 "num_base_bdevs_discovered": 3, 00:20:35.480 "num_base_bdevs_operational": 3, 00:20:35.480 "base_bdevs_list": [ 00:20:35.480 { 00:20:35.480 "name": null, 00:20:35.480 
"uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.480 "is_configured": false, 00:20:35.480 "data_offset": 0, 00:20:35.480 "data_size": 65536 00:20:35.480 }, 00:20:35.480 { 00:20:35.480 "name": "BaseBdev2", 00:20:35.480 "uuid": "4a4f59d7-6af5-5b0d-b1ae-e021ad262dd2", 00:20:35.480 "is_configured": true, 00:20:35.480 "data_offset": 0, 00:20:35.480 "data_size": 65536 00:20:35.480 }, 00:20:35.480 { 00:20:35.480 "name": "BaseBdev3", 00:20:35.480 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:35.480 "is_configured": true, 00:20:35.480 "data_offset": 0, 00:20:35.480 "data_size": 65536 00:20:35.480 }, 00:20:35.480 { 00:20:35.480 "name": "BaseBdev4", 00:20:35.480 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:35.480 "is_configured": true, 00:20:35.480 "data_offset": 0, 00:20:35.480 "data_size": 65536 00:20:35.480 } 00:20:35.480 ] 00:20:35.480 }' 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.480 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.048 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.048 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.048 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.048 [2024-12-09 23:01:51.664638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.048 [2024-12-09 23:01:51.683048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:20:36.048 23:01:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.048 23:01:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:36.048 [2024-12-09 23:01:51.685225] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.987 23:01:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.987 "name": "raid_bdev1", 00:20:36.987 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:36.987 "strip_size_kb": 0, 00:20:36.987 "state": "online", 00:20:36.987 "raid_level": "raid1", 00:20:36.987 "superblock": false, 00:20:36.987 "num_base_bdevs": 4, 00:20:36.987 "num_base_bdevs_discovered": 4, 00:20:36.987 "num_base_bdevs_operational": 4, 00:20:36.987 "process": { 00:20:36.987 "type": "rebuild", 00:20:36.987 "target": "spare", 00:20:36.987 "progress": { 00:20:36.987 "blocks": 20480, 00:20:36.987 "percent": 31 00:20:36.987 } 00:20:36.987 }, 00:20:36.987 "base_bdevs_list": [ 00:20:36.987 { 00:20:36.987 "name": "spare", 00:20:36.987 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:36.987 "is_configured": true, 00:20:36.987 "data_offset": 0, 00:20:36.987 "data_size": 65536 00:20:36.987 }, 00:20:36.987 { 
00:20:36.987 "name": "BaseBdev2", 00:20:36.987 "uuid": "4a4f59d7-6af5-5b0d-b1ae-e021ad262dd2", 00:20:36.987 "is_configured": true, 00:20:36.987 "data_offset": 0, 00:20:36.987 "data_size": 65536 00:20:36.987 }, 00:20:36.987 { 00:20:36.987 "name": "BaseBdev3", 00:20:36.987 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:36.987 "is_configured": true, 00:20:36.987 "data_offset": 0, 00:20:36.987 "data_size": 65536 00:20:36.987 }, 00:20:36.987 { 00:20:36.987 "name": "BaseBdev4", 00:20:36.987 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:36.987 "is_configured": true, 00:20:36.987 "data_offset": 0, 00:20:36.987 "data_size": 65536 00:20:36.987 } 00:20:36.987 ] 00:20:36.987 }' 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.987 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.247 [2024-12-09 23:01:52.844638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.247 [2024-12-09 23:01:52.891186] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.247 [2024-12-09 23:01:52.891276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.247 [2024-12-09 23:01:52.891296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.247 [2024-12-09 23:01:52.891307] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.247 "name": "raid_bdev1", 00:20:37.247 "uuid": 
"e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:37.247 "strip_size_kb": 0, 00:20:37.247 "state": "online", 00:20:37.247 "raid_level": "raid1", 00:20:37.247 "superblock": false, 00:20:37.247 "num_base_bdevs": 4, 00:20:37.247 "num_base_bdevs_discovered": 3, 00:20:37.247 "num_base_bdevs_operational": 3, 00:20:37.247 "base_bdevs_list": [ 00:20:37.247 { 00:20:37.247 "name": null, 00:20:37.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.247 "is_configured": false, 00:20:37.247 "data_offset": 0, 00:20:37.247 "data_size": 65536 00:20:37.247 }, 00:20:37.247 { 00:20:37.247 "name": "BaseBdev2", 00:20:37.247 "uuid": "4a4f59d7-6af5-5b0d-b1ae-e021ad262dd2", 00:20:37.247 "is_configured": true, 00:20:37.247 "data_offset": 0, 00:20:37.247 "data_size": 65536 00:20:37.247 }, 00:20:37.247 { 00:20:37.247 "name": "BaseBdev3", 00:20:37.247 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:37.247 "is_configured": true, 00:20:37.247 "data_offset": 0, 00:20:37.247 "data_size": 65536 00:20:37.247 }, 00:20:37.247 { 00:20:37.247 "name": "BaseBdev4", 00:20:37.247 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:37.247 "is_configured": true, 00:20:37.247 "data_offset": 0, 00:20:37.247 "data_size": 65536 00:20:37.247 } 00:20:37.247 ] 00:20:37.247 }' 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.247 23:01:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.817 "name": "raid_bdev1", 00:20:37.817 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:37.817 "strip_size_kb": 0, 00:20:37.817 "state": "online", 00:20:37.817 "raid_level": "raid1", 00:20:37.817 "superblock": false, 00:20:37.817 "num_base_bdevs": 4, 00:20:37.817 "num_base_bdevs_discovered": 3, 00:20:37.817 "num_base_bdevs_operational": 3, 00:20:37.817 "base_bdevs_list": [ 00:20:37.817 { 00:20:37.817 "name": null, 00:20:37.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.817 "is_configured": false, 00:20:37.817 "data_offset": 0, 00:20:37.817 "data_size": 65536 00:20:37.817 }, 00:20:37.817 { 00:20:37.817 "name": "BaseBdev2", 00:20:37.817 "uuid": "4a4f59d7-6af5-5b0d-b1ae-e021ad262dd2", 00:20:37.817 "is_configured": true, 00:20:37.817 "data_offset": 0, 00:20:37.817 "data_size": 65536 00:20:37.817 }, 00:20:37.817 { 00:20:37.817 "name": "BaseBdev3", 00:20:37.817 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:37.817 "is_configured": true, 00:20:37.817 "data_offset": 0, 00:20:37.817 "data_size": 65536 00:20:37.817 }, 00:20:37.817 { 00:20:37.817 "name": "BaseBdev4", 00:20:37.817 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:37.817 "is_configured": true, 00:20:37.817 "data_offset": 0, 00:20:37.817 "data_size": 65536 00:20:37.817 } 00:20:37.817 ] 00:20:37.817 }' 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.817 [2024-12-09 23:01:53.588630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:37.817 [2024-12-09 23:01:53.604149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.817 23:01:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:37.817 [2024-12-09 23:01:53.606203] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.205 "name": "raid_bdev1", 00:20:39.205 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:39.205 "strip_size_kb": 0, 00:20:39.205 "state": "online", 00:20:39.205 "raid_level": "raid1", 00:20:39.205 "superblock": false, 00:20:39.205 "num_base_bdevs": 4, 00:20:39.205 "num_base_bdevs_discovered": 4, 00:20:39.205 "num_base_bdevs_operational": 4, 00:20:39.205 "process": { 00:20:39.205 "type": "rebuild", 00:20:39.205 "target": "spare", 00:20:39.205 "progress": { 00:20:39.205 "blocks": 20480, 00:20:39.205 "percent": 31 00:20:39.205 } 00:20:39.205 }, 00:20:39.205 "base_bdevs_list": [ 00:20:39.205 { 00:20:39.205 "name": "spare", 00:20:39.205 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:39.205 "is_configured": true, 00:20:39.205 "data_offset": 0, 00:20:39.205 "data_size": 65536 00:20:39.205 }, 00:20:39.205 { 00:20:39.205 "name": "BaseBdev2", 00:20:39.205 "uuid": "4a4f59d7-6af5-5b0d-b1ae-e021ad262dd2", 00:20:39.205 "is_configured": true, 00:20:39.205 "data_offset": 0, 00:20:39.205 "data_size": 65536 00:20:39.205 }, 00:20:39.205 { 00:20:39.205 "name": "BaseBdev3", 00:20:39.205 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:39.205 "is_configured": true, 00:20:39.205 "data_offset": 0, 00:20:39.205 "data_size": 65536 00:20:39.205 }, 00:20:39.205 { 00:20:39.205 "name": "BaseBdev4", 00:20:39.205 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:39.205 "is_configured": true, 00:20:39.205 "data_offset": 0, 00:20:39.205 "data_size": 65536 00:20:39.205 } 00:20:39.205 ] 00:20:39.205 }' 
00:20:39.205 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.206 [2024-12-09 23:01:54.769761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:39.206 [2024-12-09 23:01:54.812091] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.206 23:01:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.206 "name": "raid_bdev1", 00:20:39.206 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:39.206 "strip_size_kb": 0, 00:20:39.206 "state": "online", 00:20:39.206 "raid_level": "raid1", 00:20:39.206 "superblock": false, 00:20:39.206 "num_base_bdevs": 4, 00:20:39.206 "num_base_bdevs_discovered": 3, 00:20:39.206 "num_base_bdevs_operational": 3, 00:20:39.206 "process": { 00:20:39.206 "type": "rebuild", 00:20:39.206 "target": "spare", 00:20:39.206 "progress": { 00:20:39.206 "blocks": 24576, 00:20:39.206 "percent": 37 00:20:39.206 } 00:20:39.206 }, 00:20:39.206 "base_bdevs_list": [ 00:20:39.206 { 00:20:39.206 "name": "spare", 00:20:39.206 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": null, 00:20:39.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.206 "is_configured": false, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": 
"BaseBdev3", 00:20:39.206 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev4", 00:20:39.206 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 } 00:20:39.206 ] 00:20:39.206 }' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=472 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.206 23:01:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.206 23:01:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.206 23:01:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.206 "name": "raid_bdev1", 00:20:39.206 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:39.206 "strip_size_kb": 0, 00:20:39.206 "state": "online", 00:20:39.206 "raid_level": "raid1", 00:20:39.206 "superblock": false, 00:20:39.206 "num_base_bdevs": 4, 00:20:39.206 "num_base_bdevs_discovered": 3, 00:20:39.206 "num_base_bdevs_operational": 3, 00:20:39.206 "process": { 00:20:39.206 "type": "rebuild", 00:20:39.206 "target": "spare", 00:20:39.206 "progress": { 00:20:39.206 "blocks": 26624, 00:20:39.206 "percent": 40 00:20:39.206 } 00:20:39.206 }, 00:20:39.206 "base_bdevs_list": [ 00:20:39.206 { 00:20:39.206 "name": "spare", 00:20:39.206 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": null, 00:20:39.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.206 "is_configured": false, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev3", 00:20:39.206 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 }, 00:20:39.206 { 00:20:39.206 "name": "BaseBdev4", 00:20:39.206 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:39.206 "is_configured": true, 00:20:39.206 "data_offset": 0, 00:20:39.206 "data_size": 65536 00:20:39.206 } 00:20:39.206 ] 00:20:39.206 }' 00:20:39.206 23:01:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.486 23:01:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:20:39.486 23:01:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.486 23:01:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.486 23:01:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.423 "name": "raid_bdev1", 00:20:40.423 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:40.423 "strip_size_kb": 0, 00:20:40.423 "state": "online", 00:20:40.423 "raid_level": "raid1", 00:20:40.423 "superblock": false, 00:20:40.423 "num_base_bdevs": 4, 00:20:40.423 "num_base_bdevs_discovered": 3, 00:20:40.423 "num_base_bdevs_operational": 3, 00:20:40.423 "process": { 
00:20:40.423 "type": "rebuild", 00:20:40.423 "target": "spare", 00:20:40.423 "progress": { 00:20:40.423 "blocks": 51200, 00:20:40.423 "percent": 78 00:20:40.423 } 00:20:40.423 }, 00:20:40.423 "base_bdevs_list": [ 00:20:40.423 { 00:20:40.423 "name": "spare", 00:20:40.423 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:40.423 "is_configured": true, 00:20:40.423 "data_offset": 0, 00:20:40.423 "data_size": 65536 00:20:40.423 }, 00:20:40.423 { 00:20:40.423 "name": null, 00:20:40.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.423 "is_configured": false, 00:20:40.423 "data_offset": 0, 00:20:40.423 "data_size": 65536 00:20:40.423 }, 00:20:40.423 { 00:20:40.423 "name": "BaseBdev3", 00:20:40.423 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:40.423 "is_configured": true, 00:20:40.423 "data_offset": 0, 00:20:40.423 "data_size": 65536 00:20:40.423 }, 00:20:40.423 { 00:20:40.423 "name": "BaseBdev4", 00:20:40.423 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:40.423 "is_configured": true, 00:20:40.423 "data_offset": 0, 00:20:40.423 "data_size": 65536 00:20:40.423 } 00:20:40.423 ] 00:20:40.423 }' 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.423 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.682 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.682 23:01:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.249 [2024-12-09 23:01:56.821992] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:41.249 [2024-12-09 23:01:56.822177] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:41.249 [2024-12-09 23:01:56.822269] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.508 "name": "raid_bdev1", 00:20:41.508 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:41.508 "strip_size_kb": 0, 00:20:41.508 "state": "online", 00:20:41.508 "raid_level": "raid1", 00:20:41.508 "superblock": false, 00:20:41.508 "num_base_bdevs": 4, 00:20:41.508 "num_base_bdevs_discovered": 3, 00:20:41.508 "num_base_bdevs_operational": 3, 00:20:41.508 "base_bdevs_list": [ 00:20:41.508 { 00:20:41.508 "name": "spare", 00:20:41.508 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:41.508 "is_configured": true, 00:20:41.508 "data_offset": 0, 00:20:41.508 "data_size": 65536 00:20:41.508 }, 00:20:41.508 { 00:20:41.508 "name": null, 
00:20:41.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.508 "is_configured": false, 00:20:41.508 "data_offset": 0, 00:20:41.508 "data_size": 65536 00:20:41.508 }, 00:20:41.508 { 00:20:41.508 "name": "BaseBdev3", 00:20:41.508 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:41.508 "is_configured": true, 00:20:41.508 "data_offset": 0, 00:20:41.508 "data_size": 65536 00:20:41.508 }, 00:20:41.508 { 00:20:41.508 "name": "BaseBdev4", 00:20:41.508 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:41.508 "is_configured": true, 00:20:41.508 "data_offset": 0, 00:20:41.508 "data_size": 65536 00:20:41.508 } 00:20:41.508 ] 00:20:41.508 }' 00:20:41.508 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.768 23:01:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.768 "name": "raid_bdev1", 00:20:41.768 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:41.768 "strip_size_kb": 0, 00:20:41.768 "state": "online", 00:20:41.768 "raid_level": "raid1", 00:20:41.768 "superblock": false, 00:20:41.768 "num_base_bdevs": 4, 00:20:41.768 "num_base_bdevs_discovered": 3, 00:20:41.768 "num_base_bdevs_operational": 3, 00:20:41.768 "base_bdevs_list": [ 00:20:41.768 { 00:20:41.768 "name": "spare", 00:20:41.768 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:41.768 "is_configured": true, 00:20:41.768 "data_offset": 0, 00:20:41.768 "data_size": 65536 00:20:41.768 }, 00:20:41.768 { 00:20:41.768 "name": null, 00:20:41.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.768 "is_configured": false, 00:20:41.768 "data_offset": 0, 00:20:41.768 "data_size": 65536 00:20:41.768 }, 00:20:41.768 { 00:20:41.768 "name": "BaseBdev3", 00:20:41.768 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:41.768 "is_configured": true, 00:20:41.768 "data_offset": 0, 00:20:41.768 "data_size": 65536 00:20:41.768 }, 00:20:41.768 { 00:20:41.768 "name": "BaseBdev4", 00:20:41.768 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:41.768 "is_configured": true, 00:20:41.768 "data_offset": 0, 00:20:41.768 "data_size": 65536 00:20:41.768 } 00:20:41.768 ] 00:20:41.768 }' 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.768 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.028 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.028 "name": "raid_bdev1", 00:20:42.028 "uuid": "e3917e5b-3989-462b-8525-9b9b9ee759bc", 00:20:42.028 "strip_size_kb": 0, 00:20:42.028 "state": "online", 
00:20:42.028 "raid_level": "raid1", 00:20:42.028 "superblock": false, 00:20:42.028 "num_base_bdevs": 4, 00:20:42.028 "num_base_bdevs_discovered": 3, 00:20:42.028 "num_base_bdevs_operational": 3, 00:20:42.028 "base_bdevs_list": [ 00:20:42.028 { 00:20:42.028 "name": "spare", 00:20:42.028 "uuid": "6307c76d-5839-5087-a2c8-57d7bd6b438e", 00:20:42.028 "is_configured": true, 00:20:42.028 "data_offset": 0, 00:20:42.028 "data_size": 65536 00:20:42.028 }, 00:20:42.028 { 00:20:42.028 "name": null, 00:20:42.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.028 "is_configured": false, 00:20:42.028 "data_offset": 0, 00:20:42.028 "data_size": 65536 00:20:42.028 }, 00:20:42.028 { 00:20:42.028 "name": "BaseBdev3", 00:20:42.028 "uuid": "dec4ddb9-9b36-5c92-9952-a2794c3267ea", 00:20:42.028 "is_configured": true, 00:20:42.028 "data_offset": 0, 00:20:42.028 "data_size": 65536 00:20:42.028 }, 00:20:42.028 { 00:20:42.028 "name": "BaseBdev4", 00:20:42.028 "uuid": "ad1e647f-b208-5ba3-8d64-7379ccb8158f", 00:20:42.028 "is_configured": true, 00:20:42.028 "data_offset": 0, 00:20:42.028 "data_size": 65536 00:20:42.028 } 00:20:42.028 ] 00:20:42.028 }' 00:20:42.028 23:01:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.028 23:01:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.287 [2024-12-09 23:01:58.058356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.287 [2024-12-09 23:01:58.058523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.287 [2024-12-09 23:01:58.058697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:42.287 [2024-12-09 23:01:58.058856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.287 [2024-12-09 23:01:58.058924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local 
i 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.287 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:42.546 /dev/nbd0 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.546 1+0 records in 00:20:42.546 1+0 records out 00:20:42.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394968 s, 10.4 MB/s 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:42.546 23:01:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.546 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:42.806 /dev/nbd1 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.806 1+0 records in 00:20:42.806 1+0 records out 00:20:42.806 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000617763 s, 6.6 MB/s 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.806 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:43.064 23:01:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:43.064 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.064 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.064 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.064 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:43.064 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.065 23:01:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.323 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78170 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78170 ']' 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78170 00:20:43.583 
23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78170 00:20:43.583 killing process with pid 78170 00:20:43.583 Received shutdown signal, test time was about 60.000000 seconds 00:20:43.583 00:20:43.583 Latency(us) 00:20:43.583 [2024-12-09T23:01:59.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.583 [2024-12-09T23:01:59.439Z] =================================================================================================================== 00:20:43.583 [2024-12-09T23:01:59.439Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78170' 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78170 00:20:43.583 [2024-12-09 23:01:59.412367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.583 23:01:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78170 00:20:44.165 [2024-12-09 23:01:59.979296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.544 ************************************ 00:20:45.544 END TEST raid_rebuild_test 00:20:45.544 ************************************ 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:45.544 00:20:45.544 real 0m19.018s 00:20:45.544 user 0m20.798s 00:20:45.544 sys 0m3.403s 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 23:02:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:45.544 23:02:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:45.544 23:02:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.544 23:02:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.544 ************************************ 00:20:45.544 START TEST raid_rebuild_test_sb 00:20:45.544 ************************************ 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- 
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78628 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78628 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78628 ']' 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.544 23:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.839 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:45.839 Zero copy mechanism will not be used. 00:20:45.839 [2024-12-09 23:02:01.456733] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:20:45.839 [2024-12-09 23:02:01.456870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78628 ] 00:20:45.839 [2024-12-09 23:02:01.624441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.120 [2024-12-09 23:02:01.788307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.379 [2024-12-09 23:02:02.015455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.379 [2024-12-09 23:02:02.015526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.637 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.637 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:46.637 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.637 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:46.637 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.638 BaseBdev1_malloc 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.638 [2024-12-09 23:02:02.442489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:46.638 [2024-12-09 23:02:02.442564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.638 [2024-12-09 23:02:02.442585] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:46.638 [2024-12-09 23:02:02.442599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.638 [2024-12-09 23:02:02.445088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.638 [2024-12-09 23:02:02.445232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.638 BaseBdev1 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.638 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 BaseBdev2_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 [2024-12-09 23:02:02.505639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:46.897 [2024-12-09 23:02:02.505795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.897 [2024-12-09 23:02:02.505820] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:46.897 [2024-12-09 23:02:02.505832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.897 [2024-12-09 23:02:02.508363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.897 [2024-12-09 23:02:02.508481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:46.897 BaseBdev2 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 BaseBdev3_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 [2024-12-09 23:02:02.579924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:46.897 [2024-12-09 23:02:02.579990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.897 [2024-12-09 23:02:02.580012] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:46.897 [2024-12-09 23:02:02.580025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:46.897 [2024-12-09 23:02:02.582552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.897 [2024-12-09 23:02:02.582590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:46.897 BaseBdev3 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 BaseBdev4_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 [2024-12-09 23:02:02.643120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:46.897 [2024-12-09 23:02:02.643198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.897 [2024-12-09 23:02:02.643224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:46.897 [2024-12-09 23:02:02.643238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.897 [2024-12-09 23:02:02.645780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.897 [2024-12-09 23:02:02.645853] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:46.897 BaseBdev4 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 spare_malloc 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 spare_delay 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.897 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.897 [2024-12-09 23:02:02.717400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:46.897 [2024-12-09 23:02:02.717585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.897 [2024-12-09 23:02:02.717609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:46.897 [2024-12-09 23:02:02.717622] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:46.897 [2024-12-09 23:02:02.719973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.897 [2024-12-09 23:02:02.720012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:46.897 spare 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.898 [2024-12-09 23:02:02.729436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.898 [2024-12-09 23:02:02.731534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.898 [2024-12-09 23:02:02.731597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.898 [2024-12-09 23:02:02.731646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:46.898 [2024-12-09 23:02:02.731851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:46.898 [2024-12-09 23:02:02.731872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:46.898 [2024-12-09 23:02:02.732116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:46.898 [2024-12-09 23:02:02.732309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:46.898 [2024-12-09 23:02:02.732319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:46.898 [2024-12-09 23:02:02.732489] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.898 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.156 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.156 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.156 "name": "raid_bdev1", 00:20:47.156 "uuid": 
"0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:47.156 "strip_size_kb": 0, 00:20:47.156 "state": "online", 00:20:47.156 "raid_level": "raid1", 00:20:47.156 "superblock": true, 00:20:47.156 "num_base_bdevs": 4, 00:20:47.156 "num_base_bdevs_discovered": 4, 00:20:47.156 "num_base_bdevs_operational": 4, 00:20:47.156 "base_bdevs_list": [ 00:20:47.156 { 00:20:47.156 "name": "BaseBdev1", 00:20:47.156 "uuid": "f866455c-a38f-52f7-b45d-2a592b6766af", 00:20:47.156 "is_configured": true, 00:20:47.156 "data_offset": 2048, 00:20:47.156 "data_size": 63488 00:20:47.156 }, 00:20:47.156 { 00:20:47.156 "name": "BaseBdev2", 00:20:47.156 "uuid": "2f292639-dd77-5fdf-8ea0-457f3db9f481", 00:20:47.156 "is_configured": true, 00:20:47.156 "data_offset": 2048, 00:20:47.156 "data_size": 63488 00:20:47.156 }, 00:20:47.156 { 00:20:47.156 "name": "BaseBdev3", 00:20:47.156 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:47.156 "is_configured": true, 00:20:47.156 "data_offset": 2048, 00:20:47.156 "data_size": 63488 00:20:47.156 }, 00:20:47.156 { 00:20:47.156 "name": "BaseBdev4", 00:20:47.156 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:47.156 "is_configured": true, 00:20:47.156 "data_offset": 2048, 00:20:47.156 "data_size": 63488 00:20:47.156 } 00:20:47.156 ] 00:20:47.156 }' 00:20:47.156 23:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.156 23:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.416 [2024-12-09 23:02:03.213161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.416 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.417 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:47.417 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:47.681 23:02:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.681 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:47.681 [2024-12-09 23:02:03.500706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:47.681 /dev/nbd0 00:20:47.944 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.944 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.944 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:47.944 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:47.944 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.945 1+0 records in 00:20:47.945 1+0 records out 00:20:47.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497302 s, 8.2 MB/s 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:47.945 23:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:54.516 63488+0 records in 00:20:54.516 63488+0 records out 00:20:54.516 32505856 bytes (33 MB, 31 MiB) copied, 5.98159 s, 5.4 MB/s 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.516 [2024-12-09 23:02:09.786599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.516 [2024-12-09 23:02:09.803717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.516 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.516 "name": "raid_bdev1", 00:20:54.516 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:54.516 "strip_size_kb": 0, 00:20:54.516 "state": "online", 00:20:54.516 "raid_level": "raid1", 00:20:54.516 "superblock": true, 00:20:54.516 "num_base_bdevs": 4, 00:20:54.516 "num_base_bdevs_discovered": 3, 00:20:54.516 "num_base_bdevs_operational": 3, 00:20:54.516 "base_bdevs_list": [ 00:20:54.516 { 00:20:54.516 "name": null, 00:20:54.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.516 "is_configured": false, 00:20:54.516 "data_offset": 0, 00:20:54.516 "data_size": 63488 00:20:54.516 }, 00:20:54.516 { 00:20:54.516 "name": "BaseBdev2", 00:20:54.516 "uuid": "2f292639-dd77-5fdf-8ea0-457f3db9f481", 00:20:54.516 "is_configured": true, 00:20:54.516 
"data_offset": 2048, 00:20:54.516 "data_size": 63488 00:20:54.516 }, 00:20:54.516 { 00:20:54.517 "name": "BaseBdev3", 00:20:54.517 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:54.517 "is_configured": true, 00:20:54.517 "data_offset": 2048, 00:20:54.517 "data_size": 63488 00:20:54.517 }, 00:20:54.517 { 00:20:54.517 "name": "BaseBdev4", 00:20:54.517 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:54.517 "is_configured": true, 00:20:54.517 "data_offset": 2048, 00:20:54.517 "data_size": 63488 00:20:54.517 } 00:20:54.517 ] 00:20:54.517 }' 00:20:54.517 23:02:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.517 23:02:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.517 23:02:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:54.517 23:02:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.517 23:02:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.517 [2024-12-09 23:02:10.271004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.517 [2024-12-09 23:02:10.288235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:20:54.517 23:02:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.517 23:02:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:54.517 [2024-12-09 23:02:10.290470] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.455 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.714 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.714 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.714 "name": "raid_bdev1", 00:20:55.714 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:55.714 "strip_size_kb": 0, 00:20:55.714 "state": "online", 00:20:55.714 "raid_level": "raid1", 00:20:55.714 "superblock": true, 00:20:55.714 "num_base_bdevs": 4, 00:20:55.714 "num_base_bdevs_discovered": 4, 00:20:55.715 "num_base_bdevs_operational": 4, 00:20:55.715 "process": { 00:20:55.715 "type": "rebuild", 00:20:55.715 "target": "spare", 00:20:55.715 "progress": { 00:20:55.715 "blocks": 20480, 00:20:55.715 "percent": 32 00:20:55.715 } 00:20:55.715 }, 00:20:55.715 "base_bdevs_list": [ 00:20:55.715 { 00:20:55.715 "name": "spare", 00:20:55.715 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:20:55.715 "is_configured": true, 00:20:55.715 "data_offset": 2048, 00:20:55.715 "data_size": 63488 00:20:55.715 }, 00:20:55.715 { 00:20:55.715 "name": "BaseBdev2", 00:20:55.715 "uuid": "2f292639-dd77-5fdf-8ea0-457f3db9f481", 00:20:55.715 "is_configured": true, 00:20:55.715 "data_offset": 2048, 00:20:55.715 "data_size": 63488 00:20:55.715 }, 00:20:55.715 { 00:20:55.715 "name": "BaseBdev3", 00:20:55.715 "uuid": 
"5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:55.715 "is_configured": true, 00:20:55.715 "data_offset": 2048, 00:20:55.715 "data_size": 63488 00:20:55.715 }, 00:20:55.715 { 00:20:55.715 "name": "BaseBdev4", 00:20:55.715 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:55.715 "is_configured": true, 00:20:55.715 "data_offset": 2048, 00:20:55.715 "data_size": 63488 00:20:55.715 } 00:20:55.715 ] 00:20:55.715 }' 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.715 [2024-12-09 23:02:11.453514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.715 [2024-12-09 23:02:11.496406] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:55.715 [2024-12-09 23:02:11.496500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.715 [2024-12-09 23:02:11.496519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.715 [2024-12-09 23:02:11.496529] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.715 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.974 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.974 "name": "raid_bdev1", 00:20:55.974 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:55.974 "strip_size_kb": 0, 00:20:55.974 "state": "online", 00:20:55.974 "raid_level": "raid1", 00:20:55.974 "superblock": true, 00:20:55.974 "num_base_bdevs": 4, 00:20:55.974 
"num_base_bdevs_discovered": 3, 00:20:55.974 "num_base_bdevs_operational": 3, 00:20:55.974 "base_bdevs_list": [ 00:20:55.974 { 00:20:55.974 "name": null, 00:20:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.974 "is_configured": false, 00:20:55.974 "data_offset": 0, 00:20:55.974 "data_size": 63488 00:20:55.974 }, 00:20:55.974 { 00:20:55.974 "name": "BaseBdev2", 00:20:55.974 "uuid": "2f292639-dd77-5fdf-8ea0-457f3db9f481", 00:20:55.974 "is_configured": true, 00:20:55.974 "data_offset": 2048, 00:20:55.974 "data_size": 63488 00:20:55.974 }, 00:20:55.974 { 00:20:55.974 "name": "BaseBdev3", 00:20:55.974 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:55.974 "is_configured": true, 00:20:55.974 "data_offset": 2048, 00:20:55.974 "data_size": 63488 00:20:55.974 }, 00:20:55.974 { 00:20:55.974 "name": "BaseBdev4", 00:20:55.974 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:55.974 "is_configured": true, 00:20:55.974 "data_offset": 2048, 00:20:55.974 "data_size": 63488 00:20:55.974 } 00:20:55.974 ] 00:20:55.974 }' 00:20:55.974 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.974 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.234 23:02:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.234 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.234 "name": "raid_bdev1", 00:20:56.234 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:56.234 "strip_size_kb": 0, 00:20:56.234 "state": "online", 00:20:56.234 "raid_level": "raid1", 00:20:56.234 "superblock": true, 00:20:56.234 "num_base_bdevs": 4, 00:20:56.234 "num_base_bdevs_discovered": 3, 00:20:56.234 "num_base_bdevs_operational": 3, 00:20:56.234 "base_bdevs_list": [ 00:20:56.234 { 00:20:56.234 "name": null, 00:20:56.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.234 "is_configured": false, 00:20:56.234 "data_offset": 0, 00:20:56.234 "data_size": 63488 00:20:56.234 }, 00:20:56.234 { 00:20:56.234 "name": "BaseBdev2", 00:20:56.234 "uuid": "2f292639-dd77-5fdf-8ea0-457f3db9f481", 00:20:56.234 "is_configured": true, 00:20:56.234 "data_offset": 2048, 00:20:56.234 "data_size": 63488 00:20:56.234 }, 00:20:56.234 { 00:20:56.234 "name": "BaseBdev3", 00:20:56.234 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:56.234 "is_configured": true, 00:20:56.234 "data_offset": 2048, 00:20:56.234 "data_size": 63488 00:20:56.234 }, 00:20:56.234 { 00:20:56.234 "name": "BaseBdev4", 00:20:56.234 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:56.234 "is_configured": true, 00:20:56.234 "data_offset": 2048, 00:20:56.234 "data_size": 63488 00:20:56.234 } 00:20:56.234 ] 00:20:56.234 }' 00:20:56.234 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.234 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:20:56.234 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.492 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:56.492 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:56.492 23:02:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.492 23:02:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.492 [2024-12-09 23:02:12.121861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.492 [2024-12-09 23:02:12.136384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:20:56.492 23:02:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.492 23:02:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:56.492 [2024-12-09 23:02:12.138432] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.427 "name": "raid_bdev1", 00:20:57.427 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:57.427 "strip_size_kb": 0, 00:20:57.427 "state": "online", 00:20:57.427 "raid_level": "raid1", 00:20:57.427 "superblock": true, 00:20:57.427 "num_base_bdevs": 4, 00:20:57.427 "num_base_bdevs_discovered": 4, 00:20:57.427 "num_base_bdevs_operational": 4, 00:20:57.427 "process": { 00:20:57.427 "type": "rebuild", 00:20:57.427 "target": "spare", 00:20:57.427 "progress": { 00:20:57.427 "blocks": 20480, 00:20:57.427 "percent": 32 00:20:57.427 } 00:20:57.427 }, 00:20:57.427 "base_bdevs_list": [ 00:20:57.427 { 00:20:57.427 "name": "spare", 00:20:57.427 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:20:57.427 "is_configured": true, 00:20:57.427 "data_offset": 2048, 00:20:57.427 "data_size": 63488 00:20:57.427 }, 00:20:57.427 { 00:20:57.427 "name": "BaseBdev2", 00:20:57.427 "uuid": "2f292639-dd77-5fdf-8ea0-457f3db9f481", 00:20:57.427 "is_configured": true, 00:20:57.427 "data_offset": 2048, 00:20:57.427 "data_size": 63488 00:20:57.427 }, 00:20:57.427 { 00:20:57.427 "name": "BaseBdev3", 00:20:57.427 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:57.427 "is_configured": true, 00:20:57.427 "data_offset": 2048, 00:20:57.427 "data_size": 63488 00:20:57.427 }, 00:20:57.427 { 00:20:57.427 "name": "BaseBdev4", 00:20:57.427 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:57.427 "is_configured": true, 00:20:57.427 "data_offset": 2048, 00:20:57.427 "data_size": 63488 00:20:57.427 } 00:20:57.427 ] 00:20:57.427 }' 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:57.427 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:57.427 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.778 [2024-12-09 23:02:13.289877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:57.778 [2024-12-09 23:02:13.444036] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.778 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.778 "name": "raid_bdev1", 00:20:57.778 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:57.778 "strip_size_kb": 0, 00:20:57.778 "state": "online", 00:20:57.778 "raid_level": "raid1", 00:20:57.778 "superblock": true, 00:20:57.778 "num_base_bdevs": 4, 00:20:57.778 "num_base_bdevs_discovered": 3, 00:20:57.778 "num_base_bdevs_operational": 3, 00:20:57.778 "process": { 00:20:57.778 "type": "rebuild", 00:20:57.778 "target": "spare", 00:20:57.778 "progress": { 00:20:57.778 "blocks": 24576, 00:20:57.778 "percent": 38 00:20:57.778 } 00:20:57.778 }, 00:20:57.778 "base_bdevs_list": [ 00:20:57.778 { 00:20:57.778 "name": "spare", 00:20:57.778 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:20:57.778 "is_configured": true, 00:20:57.778 "data_offset": 2048, 00:20:57.778 "data_size": 63488 00:20:57.778 }, 00:20:57.778 { 00:20:57.778 "name": null, 00:20:57.778 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:57.778 "is_configured": false, 00:20:57.778 "data_offset": 0, 00:20:57.778 "data_size": 63488 00:20:57.778 }, 00:20:57.778 { 00:20:57.778 "name": "BaseBdev3", 00:20:57.779 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:57.779 "is_configured": true, 00:20:57.779 "data_offset": 2048, 00:20:57.779 "data_size": 63488 00:20:57.779 }, 00:20:57.779 { 00:20:57.779 "name": "BaseBdev4", 00:20:57.779 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:57.779 "is_configured": true, 00:20:57.779 "data_offset": 2048, 00:20:57.779 "data_size": 63488 00:20:57.779 } 00:20:57.779 ] 00:20:57.779 }' 00:20:57.779 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.779 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.779 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=491 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.039 "name": "raid_bdev1", 00:20:58.039 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:58.039 "strip_size_kb": 0, 00:20:58.039 "state": "online", 00:20:58.039 "raid_level": "raid1", 00:20:58.039 "superblock": true, 00:20:58.039 "num_base_bdevs": 4, 00:20:58.039 "num_base_bdevs_discovered": 3, 00:20:58.039 "num_base_bdevs_operational": 3, 00:20:58.039 "process": { 00:20:58.039 "type": "rebuild", 00:20:58.039 "target": "spare", 00:20:58.039 "progress": { 00:20:58.039 "blocks": 26624, 00:20:58.039 "percent": 41 00:20:58.039 } 00:20:58.039 }, 00:20:58.039 "base_bdevs_list": [ 00:20:58.039 { 00:20:58.039 "name": "spare", 00:20:58.039 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:20:58.039 "is_configured": true, 00:20:58.039 "data_offset": 2048, 00:20:58.039 "data_size": 63488 00:20:58.039 }, 00:20:58.039 { 00:20:58.039 "name": null, 00:20:58.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.039 "is_configured": false, 00:20:58.039 "data_offset": 0, 00:20:58.039 "data_size": 63488 00:20:58.039 }, 00:20:58.039 { 00:20:58.039 "name": "BaseBdev3", 00:20:58.039 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:58.039 "is_configured": true, 00:20:58.039 "data_offset": 2048, 00:20:58.039 "data_size": 63488 00:20:58.039 }, 00:20:58.039 { 00:20:58.039 "name": "BaseBdev4", 00:20:58.039 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:58.039 "is_configured": true, 00:20:58.039 "data_offset": 2048, 00:20:58.039 "data_size": 63488 
00:20:58.039 } 00:20:58.039 ] 00:20:58.039 }' 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.039 23:02:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.972 "name": "raid_bdev1", 00:20:58.972 "uuid": 
"0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:20:58.972 "strip_size_kb": 0, 00:20:58.972 "state": "online", 00:20:58.972 "raid_level": "raid1", 00:20:58.972 "superblock": true, 00:20:58.972 "num_base_bdevs": 4, 00:20:58.972 "num_base_bdevs_discovered": 3, 00:20:58.972 "num_base_bdevs_operational": 3, 00:20:58.972 "process": { 00:20:58.972 "type": "rebuild", 00:20:58.972 "target": "spare", 00:20:58.972 "progress": { 00:20:58.972 "blocks": 51200, 00:20:58.972 "percent": 80 00:20:58.972 } 00:20:58.972 }, 00:20:58.972 "base_bdevs_list": [ 00:20:58.972 { 00:20:58.972 "name": "spare", 00:20:58.972 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:20:58.972 "is_configured": true, 00:20:58.972 "data_offset": 2048, 00:20:58.972 "data_size": 63488 00:20:58.972 }, 00:20:58.972 { 00:20:58.972 "name": null, 00:20:58.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.972 "is_configured": false, 00:20:58.972 "data_offset": 0, 00:20:58.972 "data_size": 63488 00:20:58.972 }, 00:20:58.972 { 00:20:58.972 "name": "BaseBdev3", 00:20:58.972 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:20:58.972 "is_configured": true, 00:20:58.972 "data_offset": 2048, 00:20:58.972 "data_size": 63488 00:20:58.972 }, 00:20:58.972 { 00:20:58.972 "name": "BaseBdev4", 00:20:58.972 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:20:58.972 "is_configured": true, 00:20:58.972 "data_offset": 2048, 00:20:58.972 "data_size": 63488 00:20:58.972 } 00:20:58.972 ] 00:20:58.972 }' 00:20:58.972 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.230 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.230 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.230 23:02:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.230 23:02:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.797 [2024-12-09 23:02:15.353396] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:59.797 [2024-12-09 23:02:15.353508] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:59.797 [2024-12-09 23:02:15.353670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.056 23:02:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.315 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.315 "name": "raid_bdev1", 00:21:00.315 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:00.315 "strip_size_kb": 0, 00:21:00.315 "state": "online", 00:21:00.315 "raid_level": "raid1", 00:21:00.315 "superblock": true, 00:21:00.315 "num_base_bdevs": 
4, 00:21:00.315 "num_base_bdevs_discovered": 3, 00:21:00.315 "num_base_bdevs_operational": 3, 00:21:00.315 "base_bdevs_list": [ 00:21:00.315 { 00:21:00.315 "name": "spare", 00:21:00.315 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 2048, 00:21:00.315 "data_size": 63488 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": null, 00:21:00.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.315 "is_configured": false, 00:21:00.315 "data_offset": 0, 00:21:00.315 "data_size": 63488 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": "BaseBdev3", 00:21:00.315 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 2048, 00:21:00.315 "data_size": 63488 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": "BaseBdev4", 00:21:00.315 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 2048, 00:21:00.315 "data_size": 63488 00:21:00.315 } 00:21:00.315 ] 00:21:00.315 }' 00:21:00.315 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.315 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:00.315 23:02:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:00.315 23:02:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.315 "name": "raid_bdev1", 00:21:00.315 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:00.315 "strip_size_kb": 0, 00:21:00.315 "state": "online", 00:21:00.315 "raid_level": "raid1", 00:21:00.315 "superblock": true, 00:21:00.315 "num_base_bdevs": 4, 00:21:00.315 "num_base_bdevs_discovered": 3, 00:21:00.315 "num_base_bdevs_operational": 3, 00:21:00.315 "base_bdevs_list": [ 00:21:00.315 { 00:21:00.315 "name": "spare", 00:21:00.315 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 2048, 00:21:00.315 "data_size": 63488 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": null, 00:21:00.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.315 "is_configured": false, 00:21:00.315 "data_offset": 0, 00:21:00.315 "data_size": 63488 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": "BaseBdev3", 00:21:00.315 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 2048, 00:21:00.315 "data_size": 63488 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": "BaseBdev4", 00:21:00.315 "uuid": 
"fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 2048, 00:21:00.315 "data_size": 63488 00:21:00.315 } 00:21:00.315 ] 00:21:00.315 }' 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:00.315 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.573 23:02:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.573 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.573 "name": "raid_bdev1", 00:21:00.573 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:00.573 "strip_size_kb": 0, 00:21:00.573 "state": "online", 00:21:00.573 "raid_level": "raid1", 00:21:00.573 "superblock": true, 00:21:00.573 "num_base_bdevs": 4, 00:21:00.573 "num_base_bdevs_discovered": 3, 00:21:00.573 "num_base_bdevs_operational": 3, 00:21:00.573 "base_bdevs_list": [ 00:21:00.573 { 00:21:00.573 "name": "spare", 00:21:00.573 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:00.573 "is_configured": true, 00:21:00.573 "data_offset": 2048, 00:21:00.573 "data_size": 63488 00:21:00.573 }, 00:21:00.573 { 00:21:00.573 "name": null, 00:21:00.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.573 "is_configured": false, 00:21:00.573 "data_offset": 0, 00:21:00.573 "data_size": 63488 00:21:00.573 }, 00:21:00.573 { 00:21:00.573 "name": "BaseBdev3", 00:21:00.573 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:00.573 "is_configured": true, 00:21:00.573 "data_offset": 2048, 00:21:00.573 "data_size": 63488 00:21:00.573 }, 00:21:00.573 { 00:21:00.573 "name": "BaseBdev4", 00:21:00.573 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:00.573 "is_configured": true, 00:21:00.573 "data_offset": 2048, 00:21:00.573 "data_size": 63488 00:21:00.573 } 00:21:00.573 ] 00:21:00.573 }' 00:21:00.574 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.574 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.832 [2024-12-09 23:02:16.617087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.832 [2024-12-09 23:02:16.617123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.832 [2024-12-09 23:02:16.617241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.832 [2024-12-09 23:02:16.617332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.832 [2024-12-09 23:02:16.617344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:00.832 
23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.832 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:01.140 /dev/nbd0 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.140 23:02:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.140 1+0 records in 00:21:01.140 1+0 records out 00:21:01.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345679 s, 11.8 MB/s 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.140 23:02:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:01.401 /dev/nbd1 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.401 1+0 records in 00:21:01.401 1+0 records out 00:21:01.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313682 s, 13.1 MB/s 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.401 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.659 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.916 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.174 [2024-12-09 23:02:17.877702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:02.174 [2024-12-09 23:02:17.877810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.174 [2024-12-09 23:02:17.877863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:02.174 [2024-12-09 23:02:17.877899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.174 [2024-12-09 23:02:17.880150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.174 [2024-12-09 23:02:17.880223] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:21:02.174 [2024-12-09 23:02:17.880353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:02.174 [2024-12-09 23:02:17.880430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.174 [2024-12-09 23:02:17.880623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.174 [2024-12-09 23:02:17.880758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:02.174 spare 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.174 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.174 [2024-12-09 23:02:17.980705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:02.174 [2024-12-09 23:02:17.980845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:02.174 [2024-12-09 23:02:17.981257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:21:02.174 [2024-12-09 23:02:17.981558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:02.174 [2024-12-09 23:02:17.981613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:02.174 [2024-12-09 23:02:17.981871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:02.175 23:02:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.175 23:02:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.175 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.434 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.434 "name": "raid_bdev1", 00:21:02.434 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:02.434 "strip_size_kb": 0, 00:21:02.434 "state": "online", 00:21:02.434 "raid_level": "raid1", 00:21:02.434 "superblock": true, 00:21:02.434 "num_base_bdevs": 4, 00:21:02.434 "num_base_bdevs_discovered": 3, 00:21:02.434 "num_base_bdevs_operational": 3, 00:21:02.434 "base_bdevs_list": [ 00:21:02.434 { 
00:21:02.434 "name": "spare", 00:21:02.434 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:02.434 "is_configured": true, 00:21:02.434 "data_offset": 2048, 00:21:02.434 "data_size": 63488 00:21:02.434 }, 00:21:02.434 { 00:21:02.434 "name": null, 00:21:02.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.434 "is_configured": false, 00:21:02.434 "data_offset": 2048, 00:21:02.434 "data_size": 63488 00:21:02.434 }, 00:21:02.434 { 00:21:02.434 "name": "BaseBdev3", 00:21:02.434 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:02.434 "is_configured": true, 00:21:02.434 "data_offset": 2048, 00:21:02.434 "data_size": 63488 00:21:02.434 }, 00:21:02.434 { 00:21:02.434 "name": "BaseBdev4", 00:21:02.434 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:02.434 "is_configured": true, 00:21:02.434 "data_offset": 2048, 00:21:02.434 "data_size": 63488 00:21:02.434 } 00:21:02.434 ] 00:21:02.434 }' 00:21:02.434 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.434 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.693 
23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.693 "name": "raid_bdev1", 00:21:02.693 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:02.693 "strip_size_kb": 0, 00:21:02.693 "state": "online", 00:21:02.693 "raid_level": "raid1", 00:21:02.693 "superblock": true, 00:21:02.693 "num_base_bdevs": 4, 00:21:02.693 "num_base_bdevs_discovered": 3, 00:21:02.693 "num_base_bdevs_operational": 3, 00:21:02.693 "base_bdevs_list": [ 00:21:02.693 { 00:21:02.693 "name": "spare", 00:21:02.693 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:02.693 "is_configured": true, 00:21:02.693 "data_offset": 2048, 00:21:02.693 "data_size": 63488 00:21:02.693 }, 00:21:02.693 { 00:21:02.693 "name": null, 00:21:02.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.693 "is_configured": false, 00:21:02.693 "data_offset": 2048, 00:21:02.693 "data_size": 63488 00:21:02.693 }, 00:21:02.693 { 00:21:02.693 "name": "BaseBdev3", 00:21:02.693 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:02.693 "is_configured": true, 00:21:02.693 "data_offset": 2048, 00:21:02.693 "data_size": 63488 00:21:02.693 }, 00:21:02.693 { 00:21:02.693 "name": "BaseBdev4", 00:21:02.693 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:02.693 "is_configured": true, 00:21:02.693 "data_offset": 2048, 00:21:02.693 "data_size": 63488 00:21:02.693 } 00:21:02.693 ] 00:21:02.693 }' 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:02.693 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.951 23:02:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.951 [2024-12-09 23:02:18.644760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.951 23:02:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.951 "name": "raid_bdev1", 00:21:02.951 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:02.951 "strip_size_kb": 0, 00:21:02.951 "state": "online", 00:21:02.951 "raid_level": "raid1", 00:21:02.951 "superblock": true, 00:21:02.951 "num_base_bdevs": 4, 00:21:02.951 "num_base_bdevs_discovered": 2, 00:21:02.951 "num_base_bdevs_operational": 2, 00:21:02.951 "base_bdevs_list": [ 00:21:02.951 { 00:21:02.951 "name": null, 00:21:02.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.951 "is_configured": false, 00:21:02.951 "data_offset": 0, 00:21:02.951 "data_size": 63488 00:21:02.951 }, 00:21:02.951 { 00:21:02.951 "name": null, 00:21:02.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.951 "is_configured": false, 00:21:02.951 "data_offset": 2048, 00:21:02.951 "data_size": 63488 00:21:02.951 }, 00:21:02.951 { 00:21:02.951 "name": "BaseBdev3", 00:21:02.951 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:02.951 
"is_configured": true, 00:21:02.951 "data_offset": 2048, 00:21:02.951 "data_size": 63488 00:21:02.951 }, 00:21:02.951 { 00:21:02.951 "name": "BaseBdev4", 00:21:02.951 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:02.951 "is_configured": true, 00:21:02.951 "data_offset": 2048, 00:21:02.951 "data_size": 63488 00:21:02.951 } 00:21:02.951 ] 00:21:02.951 }' 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.951 23:02:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.519 23:02:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:03.519 23:02:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.519 23:02:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.519 [2024-12-09 23:02:19.140639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.519 [2024-12-09 23:02:19.140916] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:03.519 [2024-12-09 23:02:19.140984] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:03.519 [2024-12-09 23:02:19.141057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.519 [2024-12-09 23:02:19.156103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:21:03.519 23:02:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.519 23:02:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:03.519 [2024-12-09 23:02:19.158096] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.453 "name": "raid_bdev1", 00:21:04.453 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:04.453 "strip_size_kb": 0, 00:21:04.453 "state": "online", 00:21:04.453 "raid_level": "raid1", 
00:21:04.453 "superblock": true, 00:21:04.453 "num_base_bdevs": 4, 00:21:04.453 "num_base_bdevs_discovered": 3, 00:21:04.453 "num_base_bdevs_operational": 3, 00:21:04.453 "process": { 00:21:04.453 "type": "rebuild", 00:21:04.453 "target": "spare", 00:21:04.453 "progress": { 00:21:04.453 "blocks": 20480, 00:21:04.453 "percent": 32 00:21:04.453 } 00:21:04.453 }, 00:21:04.453 "base_bdevs_list": [ 00:21:04.453 { 00:21:04.453 "name": "spare", 00:21:04.453 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:04.453 "is_configured": true, 00:21:04.453 "data_offset": 2048, 00:21:04.453 "data_size": 63488 00:21:04.453 }, 00:21:04.453 { 00:21:04.453 "name": null, 00:21:04.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.453 "is_configured": false, 00:21:04.453 "data_offset": 2048, 00:21:04.453 "data_size": 63488 00:21:04.453 }, 00:21:04.453 { 00:21:04.453 "name": "BaseBdev3", 00:21:04.453 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:04.453 "is_configured": true, 00:21:04.453 "data_offset": 2048, 00:21:04.453 "data_size": 63488 00:21:04.453 }, 00:21:04.453 { 00:21:04.453 "name": "BaseBdev4", 00:21:04.453 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:04.453 "is_configured": true, 00:21:04.453 "data_offset": 2048, 00:21:04.453 "data_size": 63488 00:21:04.453 } 00:21:04.453 ] 00:21:04.453 }' 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:04.453 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 [2024-12-09 23:02:20.305557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.717 [2024-12-09 23:02:20.363783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:04.717 [2024-12-09 23:02:20.363950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.717 [2024-12-09 23:02:20.363976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.717 [2024-12-09 23:02:20.363984] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.717 "name": "raid_bdev1", 00:21:04.717 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:04.717 "strip_size_kb": 0, 00:21:04.717 "state": "online", 00:21:04.717 "raid_level": "raid1", 00:21:04.717 "superblock": true, 00:21:04.717 "num_base_bdevs": 4, 00:21:04.717 "num_base_bdevs_discovered": 2, 00:21:04.717 "num_base_bdevs_operational": 2, 00:21:04.717 "base_bdevs_list": [ 00:21:04.717 { 00:21:04.717 "name": null, 00:21:04.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.717 "is_configured": false, 00:21:04.717 "data_offset": 0, 00:21:04.717 "data_size": 63488 00:21:04.717 }, 00:21:04.717 { 00:21:04.717 "name": null, 00:21:04.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.717 "is_configured": false, 00:21:04.717 "data_offset": 2048, 00:21:04.717 "data_size": 63488 00:21:04.717 }, 00:21:04.717 { 00:21:04.717 "name": "BaseBdev3", 00:21:04.717 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:04.717 "is_configured": true, 00:21:04.717 "data_offset": 2048, 00:21:04.717 "data_size": 63488 00:21:04.717 }, 00:21:04.717 { 00:21:04.717 "name": "BaseBdev4", 00:21:04.717 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:04.717 "is_configured": true, 00:21:04.717 "data_offset": 2048, 00:21:04.717 "data_size": 63488 00:21:04.717 } 00:21:04.717 ] 00:21:04.717 }' 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:04.717 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.987 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:04.987 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.987 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.987 [2024-12-09 23:02:20.838926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:04.987 [2024-12-09 23:02:20.839046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.987 [2024-12-09 23:02:20.839089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:04.987 [2024-12-09 23:02:20.839146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.987 [2024-12-09 23:02:20.839697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.987 [2024-12-09 23:02:20.839754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:04.987 [2024-12-09 23:02:20.839884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:04.987 [2024-12-09 23:02:20.839924] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:04.987 [2024-12-09 23:02:20.839971] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:04.987 [2024-12-09 23:02:20.840062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.244 [2024-12-09 23:02:20.854890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:21:05.244 spare 00:21:05.244 23:02:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.244 [2024-12-09 23:02:20.856904] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.244 23:02:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.179 "name": "raid_bdev1", 00:21:06.179 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:06.179 "strip_size_kb": 0, 00:21:06.179 "state": "online", 00:21:06.179 
"raid_level": "raid1", 00:21:06.179 "superblock": true, 00:21:06.179 "num_base_bdevs": 4, 00:21:06.179 "num_base_bdevs_discovered": 3, 00:21:06.179 "num_base_bdevs_operational": 3, 00:21:06.179 "process": { 00:21:06.179 "type": "rebuild", 00:21:06.179 "target": "spare", 00:21:06.179 "progress": { 00:21:06.179 "blocks": 20480, 00:21:06.179 "percent": 32 00:21:06.179 } 00:21:06.179 }, 00:21:06.179 "base_bdevs_list": [ 00:21:06.179 { 00:21:06.179 "name": "spare", 00:21:06.179 "uuid": "20a78aa8-d8fb-5971-b280-0e2a415c2662", 00:21:06.179 "is_configured": true, 00:21:06.179 "data_offset": 2048, 00:21:06.179 "data_size": 63488 00:21:06.179 }, 00:21:06.179 { 00:21:06.179 "name": null, 00:21:06.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.179 "is_configured": false, 00:21:06.179 "data_offset": 2048, 00:21:06.179 "data_size": 63488 00:21:06.179 }, 00:21:06.179 { 00:21:06.179 "name": "BaseBdev3", 00:21:06.179 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:06.179 "is_configured": true, 00:21:06.179 "data_offset": 2048, 00:21:06.179 "data_size": 63488 00:21:06.179 }, 00:21:06.179 { 00:21:06.179 "name": "BaseBdev4", 00:21:06.179 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:06.179 "is_configured": true, 00:21:06.179 "data_offset": 2048, 00:21:06.179 "data_size": 63488 00:21:06.179 } 00:21:06.179 ] 00:21:06.179 }' 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.179 23:02:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.179 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.179 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.179 23:02:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.179 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.179 [2024-12-09 23:02:22.024676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.438 [2024-12-09 23:02:22.062684] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.438 [2024-12-09 23:02:22.062742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.438 [2024-12-09 23:02:22.062758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.438 [2024-12-09 23:02:22.062766] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.438 
23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.438 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.438 "name": "raid_bdev1", 00:21:06.438 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:06.438 "strip_size_kb": 0, 00:21:06.438 "state": "online", 00:21:06.438 "raid_level": "raid1", 00:21:06.438 "superblock": true, 00:21:06.438 "num_base_bdevs": 4, 00:21:06.438 "num_base_bdevs_discovered": 2, 00:21:06.438 "num_base_bdevs_operational": 2, 00:21:06.438 "base_bdevs_list": [ 00:21:06.438 { 00:21:06.438 "name": null, 00:21:06.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.438 "is_configured": false, 00:21:06.438 "data_offset": 0, 00:21:06.438 "data_size": 63488 00:21:06.439 }, 00:21:06.439 { 00:21:06.439 "name": null, 00:21:06.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.439 "is_configured": false, 00:21:06.439 "data_offset": 2048, 00:21:06.439 "data_size": 63488 00:21:06.439 }, 00:21:06.439 { 00:21:06.439 "name": "BaseBdev3", 00:21:06.439 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:06.439 "is_configured": true, 00:21:06.439 "data_offset": 2048, 00:21:06.439 "data_size": 63488 00:21:06.439 }, 00:21:06.439 { 00:21:06.439 "name": "BaseBdev4", 00:21:06.439 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:06.439 "is_configured": true, 00:21:06.439 "data_offset": 2048, 00:21:06.439 "data_size": 63488 00:21:06.439 } 00:21:06.439 ] 00:21:06.439 }' 00:21:06.439 23:02:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.439 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.697 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.955 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.955 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.955 "name": "raid_bdev1", 00:21:06.955 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:06.955 "strip_size_kb": 0, 00:21:06.955 "state": "online", 00:21:06.955 "raid_level": "raid1", 00:21:06.955 "superblock": true, 00:21:06.955 "num_base_bdevs": 4, 00:21:06.955 "num_base_bdevs_discovered": 2, 00:21:06.955 "num_base_bdevs_operational": 2, 00:21:06.955 "base_bdevs_list": [ 00:21:06.955 { 00:21:06.955 "name": null, 00:21:06.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.955 "is_configured": false, 00:21:06.955 "data_offset": 0, 00:21:06.956 "data_size": 63488 00:21:06.956 }, 00:21:06.956 
{ 00:21:06.956 "name": null, 00:21:06.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.956 "is_configured": false, 00:21:06.956 "data_offset": 2048, 00:21:06.956 "data_size": 63488 00:21:06.956 }, 00:21:06.956 { 00:21:06.956 "name": "BaseBdev3", 00:21:06.956 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:06.956 "is_configured": true, 00:21:06.956 "data_offset": 2048, 00:21:06.956 "data_size": 63488 00:21:06.956 }, 00:21:06.956 { 00:21:06.956 "name": "BaseBdev4", 00:21:06.956 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:06.956 "is_configured": true, 00:21:06.956 "data_offset": 2048, 00:21:06.956 "data_size": 63488 00:21:06.956 } 00:21:06.956 ] 00:21:06.956 }' 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.956 [2024-12-09 23:02:22.696320] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:06.956 [2024-12-09 23:02:22.696386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.956 [2024-12-09 23:02:22.696407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:06.956 [2024-12-09 23:02:22.696418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.956 [2024-12-09 23:02:22.696941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.956 [2024-12-09 23:02:22.696965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.956 [2024-12-09 23:02:22.697056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:06.956 [2024-12-09 23:02:22.697074] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:06.956 [2024-12-09 23:02:22.697083] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:06.956 [2024-12-09 23:02:22.697108] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:06.956 BaseBdev1 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.956 23:02:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.893 23:02:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.893 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.894 23:02:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.153 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.153 "name": "raid_bdev1", 00:21:08.153 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:08.153 "strip_size_kb": 0, 00:21:08.153 "state": "online", 00:21:08.153 "raid_level": "raid1", 00:21:08.153 "superblock": true, 00:21:08.153 "num_base_bdevs": 4, 00:21:08.153 "num_base_bdevs_discovered": 2, 00:21:08.153 "num_base_bdevs_operational": 2, 00:21:08.153 "base_bdevs_list": [ 00:21:08.153 { 00:21:08.153 "name": null, 00:21:08.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.153 "is_configured": false, 00:21:08.153 "data_offset": 0, 00:21:08.153 "data_size": 63488 00:21:08.153 }, 00:21:08.153 { 00:21:08.153 "name": null, 00:21:08.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.154 
"is_configured": false, 00:21:08.154 "data_offset": 2048, 00:21:08.154 "data_size": 63488 00:21:08.154 }, 00:21:08.154 { 00:21:08.154 "name": "BaseBdev3", 00:21:08.154 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:08.154 "is_configured": true, 00:21:08.154 "data_offset": 2048, 00:21:08.154 "data_size": 63488 00:21:08.154 }, 00:21:08.154 { 00:21:08.154 "name": "BaseBdev4", 00:21:08.154 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:08.154 "is_configured": true, 00:21:08.154 "data_offset": 2048, 00:21:08.154 "data_size": 63488 00:21:08.154 } 00:21:08.154 ] 00:21:08.154 }' 00:21:08.154 23:02:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.154 23:02:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.415 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:08.415 "name": "raid_bdev1", 00:21:08.415 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:08.415 "strip_size_kb": 0, 00:21:08.415 "state": "online", 00:21:08.415 "raid_level": "raid1", 00:21:08.415 "superblock": true, 00:21:08.415 "num_base_bdevs": 4, 00:21:08.416 "num_base_bdevs_discovered": 2, 00:21:08.416 "num_base_bdevs_operational": 2, 00:21:08.416 "base_bdevs_list": [ 00:21:08.416 { 00:21:08.416 "name": null, 00:21:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.416 "is_configured": false, 00:21:08.416 "data_offset": 0, 00:21:08.416 "data_size": 63488 00:21:08.416 }, 00:21:08.416 { 00:21:08.416 "name": null, 00:21:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.416 "is_configured": false, 00:21:08.416 "data_offset": 2048, 00:21:08.416 "data_size": 63488 00:21:08.416 }, 00:21:08.416 { 00:21:08.416 "name": "BaseBdev3", 00:21:08.416 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:08.416 "is_configured": true, 00:21:08.416 "data_offset": 2048, 00:21:08.416 "data_size": 63488 00:21:08.416 }, 00:21:08.416 { 00:21:08.416 "name": "BaseBdev4", 00:21:08.416 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:08.416 "is_configured": true, 00:21:08.416 "data_offset": 2048, 00:21:08.416 "data_size": 63488 00:21:08.416 } 00:21:08.416 ] 00:21:08.416 }' 00:21:08.416 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.676 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.676 [2024-12-09 23:02:24.353539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.676 [2024-12-09 23:02:24.353751] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:08.676 [2024-12-09 23:02:24.353765] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:08.676 request: 00:21:08.676 { 00:21:08.676 "base_bdev": "BaseBdev1", 00:21:08.676 "raid_bdev": "raid_bdev1", 00:21:08.677 "method": "bdev_raid_add_base_bdev", 00:21:08.677 "req_id": 1 00:21:08.677 } 00:21:08.677 Got JSON-RPC error response 00:21:08.677 response: 00:21:08.677 { 00:21:08.677 "code": -22, 00:21:08.677 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:08.677 } 00:21:08.677 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:08.677 23:02:24 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:21:08.677 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:08.677 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:08.677 23:02:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:08.677 23:02:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.611 "name": "raid_bdev1", 00:21:09.611 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:09.611 "strip_size_kb": 0, 00:21:09.611 "state": "online", 00:21:09.611 "raid_level": "raid1", 00:21:09.611 "superblock": true, 00:21:09.611 "num_base_bdevs": 4, 00:21:09.611 "num_base_bdevs_discovered": 2, 00:21:09.611 "num_base_bdevs_operational": 2, 00:21:09.611 "base_bdevs_list": [ 00:21:09.611 { 00:21:09.611 "name": null, 00:21:09.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.611 "is_configured": false, 00:21:09.611 "data_offset": 0, 00:21:09.611 "data_size": 63488 00:21:09.611 }, 00:21:09.611 { 00:21:09.611 "name": null, 00:21:09.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.611 "is_configured": false, 00:21:09.611 "data_offset": 2048, 00:21:09.611 "data_size": 63488 00:21:09.611 }, 00:21:09.611 { 00:21:09.611 "name": "BaseBdev3", 00:21:09.611 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:09.611 "is_configured": true, 00:21:09.611 "data_offset": 2048, 00:21:09.611 "data_size": 63488 00:21:09.611 }, 00:21:09.611 { 00:21:09.611 "name": "BaseBdev4", 00:21:09.611 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:09.611 "is_configured": true, 00:21:09.611 "data_offset": 2048, 00:21:09.611 "data_size": 63488 00:21:09.611 } 00:21:09.611 ] 00:21:09.611 }' 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.611 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.178 23:02:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.178 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.178 "name": "raid_bdev1", 00:21:10.178 "uuid": "0bbf4915-a4d3-4848-a75c-b50bc8293d56", 00:21:10.178 "strip_size_kb": 0, 00:21:10.178 "state": "online", 00:21:10.178 "raid_level": "raid1", 00:21:10.178 "superblock": true, 00:21:10.178 "num_base_bdevs": 4, 00:21:10.178 "num_base_bdevs_discovered": 2, 00:21:10.178 "num_base_bdevs_operational": 2, 00:21:10.178 "base_bdevs_list": [ 00:21:10.178 { 00:21:10.178 "name": null, 00:21:10.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.178 "is_configured": false, 00:21:10.178 "data_offset": 0, 00:21:10.178 "data_size": 63488 00:21:10.178 }, 00:21:10.178 { 00:21:10.178 "name": null, 00:21:10.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.178 "is_configured": false, 00:21:10.179 "data_offset": 2048, 00:21:10.179 "data_size": 63488 00:21:10.179 }, 00:21:10.179 { 00:21:10.179 "name": "BaseBdev3", 00:21:10.179 "uuid": "5ad49e48-80ff-521d-b74c-b2a8c7b674f1", 00:21:10.179 "is_configured": true, 00:21:10.179 "data_offset": 2048, 00:21:10.179 "data_size": 63488 00:21:10.179 }, 
00:21:10.179 { 00:21:10.179 "name": "BaseBdev4", 00:21:10.179 "uuid": "fea6e9df-d647-5814-9643-c094491dd8c5", 00:21:10.179 "is_configured": true, 00:21:10.179 "data_offset": 2048, 00:21:10.179 "data_size": 63488 00:21:10.179 } 00:21:10.179 ] 00:21:10.179 }' 00:21:10.179 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.179 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.179 23:02:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78628 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78628 ']' 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78628 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.179 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78628 00:21:10.438 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.438 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.438 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78628' 00:21:10.438 killing process with pid 78628 00:21:10.438 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78628 00:21:10.438 Received shutdown signal, test time was about 60.000000 seconds 00:21:10.438 00:21:10.438 Latency(us) 00:21:10.438 
[2024-12-09T23:02:26.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.438 [2024-12-09T23:02:26.294Z] =================================================================================================================== 00:21:10.438 [2024-12-09T23:02:26.294Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:10.438 23:02:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78628 00:21:10.438 [2024-12-09 23:02:26.049825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.438 [2024-12-09 23:02:26.049984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.438 [2024-12-09 23:02:26.050092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.438 [2024-12-09 23:02:26.050108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:11.008 [2024-12-09 23:02:26.571048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:11.963 00:21:11.963 real 0m26.386s 00:21:11.963 user 0m32.058s 00:21:11.963 sys 0m3.863s 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.963 ************************************ 00:21:11.963 END TEST raid_rebuild_test_sb 00:21:11.963 ************************************ 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.963 23:02:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:21:11.963 23:02:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:11.963 23:02:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.963 23:02:27 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:21:11.963 ************************************ 00:21:11.963 START TEST raid_rebuild_test_io 00:21:11.963 ************************************ 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:11.963 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79394 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79394 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79394 ']' 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.232 23:02:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:12.232 [2024-12-09 23:02:27.909856] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:21:12.232 [2024-12-09 23:02:27.910070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:21:12.232 Zero copy mechanism will not be used. 00:21:12.232 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79394 ] 00:21:12.232 [2024-12-09 23:02:28.084599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.503 [2024-12-09 23:02:28.201570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.775 [2024-12-09 23:02:28.413906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.775 [2024-12-09 23:02:28.413997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 BaseBdev1_malloc 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 [2024-12-09 23:02:28.806151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:13.037 [2024-12-09 23:02:28.806215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.037 [2024-12-09 23:02:28.806255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:13.037 [2024-12-09 23:02:28.806267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.037 [2024-12-09 23:02:28.808592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.037 [2024-12-09 23:02:28.808669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:13.037 BaseBdev1 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:21:13.037 BaseBdev2_malloc 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.037 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.038 [2024-12-09 23:02:28.862935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:13.038 [2024-12-09 23:02:28.862994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.038 [2024-12-09 23:02:28.863014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:13.038 [2024-12-09 23:02:28.863026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.038 [2024-12-09 23:02:28.865111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.038 [2024-12-09 23:02:28.865152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:13.038 BaseBdev2 00:21:13.038 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.038 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:13.038 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:13.038 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.038 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 BaseBdev3_malloc 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 [2024-12-09 23:02:28.935733] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:13.297 [2024-12-09 23:02:28.935791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.297 [2024-12-09 23:02:28.935815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:13.297 [2024-12-09 23:02:28.935825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.297 [2024-12-09 23:02:28.938185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.297 [2024-12-09 23:02:28.938265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:13.297 BaseBdev3 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 BaseBdev4_malloc 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 [2024-12-09 23:02:28.990120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:13.297 [2024-12-09 23:02:28.990189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.297 [2024-12-09 23:02:28.990211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:13.297 [2024-12-09 23:02:28.990222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.297 [2024-12-09 23:02:28.992346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.297 [2024-12-09 23:02:28.992390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:13.297 BaseBdev4 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 spare_malloc 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 spare_delay 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 [2024-12-09 23:02:29.057259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.297 [2024-12-09 23:02:29.057316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.297 [2024-12-09 23:02:29.057337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:13.297 [2024-12-09 23:02:29.057347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.297 [2024-12-09 23:02:29.059623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.297 [2024-12-09 23:02:29.059660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.297 spare 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 [2024-12-09 23:02:29.065298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.297 [2024-12-09 23:02:29.067306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.297 [2024-12-09 23:02:29.067376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:13.297 [2024-12-09 23:02:29.067435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:21:13.297 [2024-12-09 23:02:29.067549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:13.297 [2024-12-09 23:02:29.067567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:13.297 [2024-12-09 23:02:29.067847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:13.297 [2024-12-09 23:02:29.068040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:13.297 [2024-12-09 23:02:29.068054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:13.297 [2024-12-09 23:02:29.068212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.297 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.297 "name": "raid_bdev1", 00:21:13.297 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:13.297 "strip_size_kb": 0, 00:21:13.297 "state": "online", 00:21:13.297 "raid_level": "raid1", 00:21:13.297 "superblock": false, 00:21:13.297 "num_base_bdevs": 4, 00:21:13.297 "num_base_bdevs_discovered": 4, 00:21:13.297 "num_base_bdevs_operational": 4, 00:21:13.297 "base_bdevs_list": [ 00:21:13.297 { 00:21:13.297 "name": "BaseBdev1", 00:21:13.297 "uuid": "663dd242-a658-5e37-aa58-950e98deda69", 00:21:13.297 "is_configured": true, 00:21:13.297 "data_offset": 0, 00:21:13.297 "data_size": 65536 00:21:13.297 }, 00:21:13.297 { 00:21:13.297 "name": "BaseBdev2", 00:21:13.297 "uuid": "f855b0c5-873e-510c-b620-4e28eeee29bf", 00:21:13.297 "is_configured": true, 00:21:13.297 "data_offset": 0, 00:21:13.297 "data_size": 65536 00:21:13.297 }, 00:21:13.297 { 00:21:13.297 "name": "BaseBdev3", 00:21:13.297 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:13.297 "is_configured": true, 00:21:13.297 "data_offset": 0, 00:21:13.297 "data_size": 65536 00:21:13.297 }, 00:21:13.297 { 00:21:13.297 "name": "BaseBdev4", 00:21:13.297 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:13.297 "is_configured": true, 00:21:13.297 "data_offset": 0, 00:21:13.297 "data_size": 65536 00:21:13.297 } 00:21:13.297 ] 00:21:13.297 }' 00:21:13.297 
23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.298 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.863 [2024-12-09 23:02:29.576896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:13.863 23:02:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.863 [2024-12-09 23:02:29.660335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.863 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.863 "name": "raid_bdev1", 00:21:13.863 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:13.863 "strip_size_kb": 0, 00:21:13.863 "state": "online", 00:21:13.863 "raid_level": "raid1", 00:21:13.863 "superblock": false, 00:21:13.863 "num_base_bdevs": 4, 00:21:13.863 "num_base_bdevs_discovered": 3, 00:21:13.863 "num_base_bdevs_operational": 3, 00:21:13.863 "base_bdevs_list": [ 00:21:13.863 { 00:21:13.863 "name": null, 00:21:13.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.863 "is_configured": false, 00:21:13.863 "data_offset": 0, 00:21:13.863 "data_size": 65536 00:21:13.863 }, 00:21:13.863 { 00:21:13.863 "name": "BaseBdev2", 00:21:13.863 "uuid": "f855b0c5-873e-510c-b620-4e28eeee29bf", 00:21:13.863 "is_configured": true, 00:21:13.863 "data_offset": 0, 00:21:13.863 "data_size": 65536 00:21:13.863 }, 00:21:13.863 { 00:21:13.863 "name": "BaseBdev3", 00:21:13.863 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:13.863 "is_configured": true, 00:21:13.863 "data_offset": 0, 00:21:13.863 "data_size": 65536 00:21:13.864 }, 00:21:13.864 { 00:21:13.864 "name": "BaseBdev4", 00:21:13.864 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:13.864 "is_configured": true, 00:21:13.864 "data_offset": 0, 00:21:13.864 "data_size": 65536 00:21:13.864 } 00:21:13.864 ] 00:21:13.864 }' 00:21:13.864 23:02:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.864 23:02:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.122 [2024-12-09 23:02:29.796761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:14.122 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:14.122 Zero copy mechanism will not be used. 00:21:14.122 Running I/O for 60 seconds... 
00:21:14.380 23:02:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:14.380 23:02:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.380 23:02:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.380 [2024-12-09 23:02:30.137757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.380 23:02:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.380 23:02:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:14.380 [2024-12-09 23:02:30.209161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:14.380 [2024-12-09 23:02:30.211571] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:14.640 [2024-12-09 23:02:30.320664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:14.640 [2024-12-09 23:02:30.322384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:14.899 [2024-12-09 23:02:30.557045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:14.899 [2024-12-09 23:02:30.557985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:15.160 167.00 IOPS, 501.00 MiB/s [2024-12-09T23:02:31.016Z] [2024-12-09 23:02:30.915648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:15.421 [2024-12-09 23:02:31.034649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:15.421 [2024-12-09 23:02:31.035114] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.421 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.421 "name": "raid_bdev1", 00:21:15.421 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:15.421 "strip_size_kb": 0, 00:21:15.421 "state": "online", 00:21:15.421 "raid_level": "raid1", 00:21:15.421 "superblock": false, 00:21:15.421 "num_base_bdevs": 4, 00:21:15.421 "num_base_bdevs_discovered": 4, 00:21:15.421 "num_base_bdevs_operational": 4, 00:21:15.421 "process": { 00:21:15.421 "type": "rebuild", 00:21:15.421 "target": "spare", 00:21:15.421 "progress": { 00:21:15.421 "blocks": 12288, 00:21:15.421 "percent": 18 00:21:15.421 } 00:21:15.421 }, 00:21:15.421 "base_bdevs_list": [ 00:21:15.421 { 00:21:15.421 "name": "spare", 00:21:15.421 "uuid": 
"dcb01394-ff55-5768-979c-63304e540b45", 00:21:15.421 "is_configured": true, 00:21:15.421 "data_offset": 0, 00:21:15.421 "data_size": 65536 00:21:15.421 }, 00:21:15.421 { 00:21:15.421 "name": "BaseBdev2", 00:21:15.421 "uuid": "f855b0c5-873e-510c-b620-4e28eeee29bf", 00:21:15.421 "is_configured": true, 00:21:15.421 "data_offset": 0, 00:21:15.421 "data_size": 65536 00:21:15.421 }, 00:21:15.421 { 00:21:15.421 "name": "BaseBdev3", 00:21:15.421 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:15.421 "is_configured": true, 00:21:15.421 "data_offset": 0, 00:21:15.422 "data_size": 65536 00:21:15.422 }, 00:21:15.422 { 00:21:15.422 "name": "BaseBdev4", 00:21:15.422 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:15.422 "is_configured": true, 00:21:15.422 "data_offset": 0, 00:21:15.422 "data_size": 65536 00:21:15.422 } 00:21:15.422 ] 00:21:15.422 }' 00:21:15.422 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.681 [2024-12-09 23:02:31.281646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:15.681 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.681 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.681 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.681 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:15.681 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.681 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:15.681 [2024-12-09 23:02:31.323758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.681 [2024-12-09 23:02:31.407541] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:15.681 [2024-12-09 23:02:31.408333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:15.681 [2024-12-09 23:02:31.517963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:15.681 [2024-12-09 23:02:31.522585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.681 [2024-12-09 23:02:31.522637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.681 [2024-12-09 23:02:31.522653] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:15.944 [2024-12-09 23:02:31.549825] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:15.944 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.944 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:15.944 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.945 "name": "raid_bdev1", 00:21:15.945 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:15.945 "strip_size_kb": 0, 00:21:15.945 "state": "online", 00:21:15.945 "raid_level": "raid1", 00:21:15.945 "superblock": false, 00:21:15.945 "num_base_bdevs": 4, 00:21:15.945 "num_base_bdevs_discovered": 3, 00:21:15.945 "num_base_bdevs_operational": 3, 00:21:15.945 "base_bdevs_list": [ 00:21:15.945 { 00:21:15.945 "name": null, 00:21:15.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.945 "is_configured": false, 00:21:15.945 "data_offset": 0, 00:21:15.945 "data_size": 65536 00:21:15.945 }, 00:21:15.945 { 00:21:15.945 "name": "BaseBdev2", 00:21:15.945 "uuid": "f855b0c5-873e-510c-b620-4e28eeee29bf", 00:21:15.945 "is_configured": true, 00:21:15.945 "data_offset": 0, 00:21:15.945 "data_size": 65536 00:21:15.945 }, 00:21:15.945 { 00:21:15.945 "name": "BaseBdev3", 00:21:15.945 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:15.945 "is_configured": true, 00:21:15.945 "data_offset": 0, 00:21:15.945 "data_size": 65536 00:21:15.945 }, 00:21:15.945 { 00:21:15.945 "name": "BaseBdev4", 00:21:15.945 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:15.945 "is_configured": true, 00:21:15.945 
"data_offset": 0, 00:21:15.945 "data_size": 65536 00:21:15.945 } 00:21:15.945 ] 00:21:15.945 }' 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.945 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 139.00 IOPS, 417.00 MiB/s [2024-12-09T23:02:32.062Z] 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.206 23:02:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 23:02:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.206 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.206 "name": "raid_bdev1", 00:21:16.206 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:16.206 "strip_size_kb": 0, 00:21:16.206 "state": "online", 00:21:16.206 "raid_level": "raid1", 00:21:16.206 "superblock": false, 00:21:16.206 "num_base_bdevs": 4, 00:21:16.206 "num_base_bdevs_discovered": 3, 00:21:16.206 "num_base_bdevs_operational": 3, 00:21:16.206 "base_bdevs_list": [ 00:21:16.206 { 00:21:16.206 "name": null, 00:21:16.206 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.206 "is_configured": false, 00:21:16.206 "data_offset": 0, 00:21:16.206 "data_size": 65536 00:21:16.206 }, 00:21:16.206 { 00:21:16.206 "name": "BaseBdev2", 00:21:16.206 "uuid": "f855b0c5-873e-510c-b620-4e28eeee29bf", 00:21:16.206 "is_configured": true, 00:21:16.206 "data_offset": 0, 00:21:16.206 "data_size": 65536 00:21:16.206 }, 00:21:16.206 { 00:21:16.206 "name": "BaseBdev3", 00:21:16.206 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:16.206 "is_configured": true, 00:21:16.206 "data_offset": 0, 00:21:16.206 "data_size": 65536 00:21:16.206 }, 00:21:16.206 { 00:21:16.206 "name": "BaseBdev4", 00:21:16.206 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:16.206 "is_configured": true, 00:21:16.206 "data_offset": 0, 00:21:16.206 "data_size": 65536 00:21:16.206 } 00:21:16.206 ] 00:21:16.206 }' 00:21:16.206 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:16.466 [2024-12-09 23:02:32.125454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.466 23:02:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:16.466 [2024-12-09 
23:02:32.198293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:16.466 [2024-12-09 23:02:32.200621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:16.725 [2024-12-09 23:02:32.320800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:16.725 [2024-12-09 23:02:32.321430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:16.725 [2024-12-09 23:02:32.443350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:16.725 [2024-12-09 23:02:32.444200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:16.983 [2024-12-09 23:02:32.786968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:16.983 [2024-12-09 23:02:32.787779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:17.243 143.67 IOPS, 431.00 MiB/s [2024-12-09T23:02:33.099Z] [2024-12-09 23:02:32.908435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:17.243 [2024-12-09 23:02:32.909336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.512 [2024-12-09 23:02:33.221941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.512 "name": "raid_bdev1", 00:21:17.512 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:17.512 "strip_size_kb": 0, 00:21:17.512 "state": "online", 00:21:17.512 "raid_level": "raid1", 00:21:17.512 "superblock": false, 00:21:17.512 "num_base_bdevs": 4, 00:21:17.512 "num_base_bdevs_discovered": 4, 00:21:17.512 "num_base_bdevs_operational": 4, 00:21:17.512 "process": { 00:21:17.512 "type": "rebuild", 00:21:17.512 "target": "spare", 00:21:17.512 "progress": { 00:21:17.512 "blocks": 12288, 00:21:17.512 "percent": 18 00:21:17.512 } 00:21:17.512 }, 00:21:17.512 "base_bdevs_list": [ 00:21:17.512 { 00:21:17.512 "name": "spare", 00:21:17.512 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:17.512 "is_configured": true, 00:21:17.512 "data_offset": 0, 00:21:17.512 "data_size": 65536 00:21:17.512 }, 00:21:17.512 { 00:21:17.512 "name": "BaseBdev2", 00:21:17.512 "uuid": "f855b0c5-873e-510c-b620-4e28eeee29bf", 00:21:17.512 "is_configured": true, 00:21:17.512 "data_offset": 0, 00:21:17.512 "data_size": 65536 
00:21:17.512 }, 00:21:17.512 { 00:21:17.512 "name": "BaseBdev3", 00:21:17.512 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:17.512 "is_configured": true, 00:21:17.512 "data_offset": 0, 00:21:17.512 "data_size": 65536 00:21:17.512 }, 00:21:17.512 { 00:21:17.512 "name": "BaseBdev4", 00:21:17.512 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:17.512 "is_configured": true, 00:21:17.512 "data_offset": 0, 00:21:17.512 "data_size": 65536 00:21:17.512 } 00:21:17.512 ] 00:21:17.512 }' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.512 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.512 [2024-12-09 23:02:33.332172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:17.512 [2024-12-09 23:02:33.340005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:17.512 
[2024-12-09 23:02:33.340907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:17.770 [2024-12-09 23:02:33.450379] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:17.770 [2024-12-09 23:02:33.450542] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.770 [2024-12-09 23:02:33.468112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.770 "name": "raid_bdev1", 00:21:17.770 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:17.770 "strip_size_kb": 0, 00:21:17.770 "state": "online", 00:21:17.770 "raid_level": "raid1", 00:21:17.770 "superblock": false, 00:21:17.770 "num_base_bdevs": 4, 00:21:17.770 "num_base_bdevs_discovered": 3, 00:21:17.770 "num_base_bdevs_operational": 3, 00:21:17.770 "process": { 00:21:17.770 "type": "rebuild", 00:21:17.770 "target": "spare", 00:21:17.770 "progress": { 00:21:17.770 "blocks": 16384, 00:21:17.770 "percent": 25 00:21:17.770 } 00:21:17.770 }, 00:21:17.770 "base_bdevs_list": [ 00:21:17.770 { 00:21:17.770 "name": "spare", 00:21:17.770 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:17.770 "is_configured": true, 00:21:17.770 "data_offset": 0, 00:21:17.770 "data_size": 65536 00:21:17.770 }, 00:21:17.770 { 00:21:17.770 "name": null, 00:21:17.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.770 "is_configured": false, 00:21:17.770 "data_offset": 0, 00:21:17.770 "data_size": 65536 00:21:17.770 }, 00:21:17.770 { 00:21:17.770 "name": "BaseBdev3", 00:21:17.770 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:17.770 "is_configured": true, 00:21:17.770 "data_offset": 0, 00:21:17.770 "data_size": 65536 00:21:17.770 }, 00:21:17.770 { 00:21:17.770 "name": "BaseBdev4", 00:21:17.770 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:17.770 "is_configured": true, 00:21:17.770 "data_offset": 0, 00:21:17.770 "data_size": 65536 00:21:17.770 } 00:21:17.770 ] 00:21:17.770 }' 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=511 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.770 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.771 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.771 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.771 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.771 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.771 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:18.029 23:02:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.029 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.029 "name": "raid_bdev1", 00:21:18.029 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:18.029 "strip_size_kb": 0, 00:21:18.029 "state": "online", 00:21:18.029 "raid_level": "raid1", 00:21:18.029 "superblock": false, 00:21:18.029 "num_base_bdevs": 4, 00:21:18.029 "num_base_bdevs_discovered": 3, 00:21:18.029 "num_base_bdevs_operational": 3, 00:21:18.029 "process": { 00:21:18.029 "type": "rebuild", 00:21:18.029 "target": "spare", 
00:21:18.029 "progress": { 00:21:18.029 "blocks": 16384, 00:21:18.029 "percent": 25 00:21:18.029 } 00:21:18.029 }, 00:21:18.029 "base_bdevs_list": [ 00:21:18.029 { 00:21:18.029 "name": "spare", 00:21:18.029 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:18.029 "is_configured": true, 00:21:18.029 "data_offset": 0, 00:21:18.029 "data_size": 65536 00:21:18.029 }, 00:21:18.029 { 00:21:18.029 "name": null, 00:21:18.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.029 "is_configured": false, 00:21:18.029 "data_offset": 0, 00:21:18.029 "data_size": 65536 00:21:18.029 }, 00:21:18.029 { 00:21:18.029 "name": "BaseBdev3", 00:21:18.029 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:18.029 "is_configured": true, 00:21:18.029 "data_offset": 0, 00:21:18.029 "data_size": 65536 00:21:18.029 }, 00:21:18.030 { 00:21:18.030 "name": "BaseBdev4", 00:21:18.030 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:18.030 "is_configured": true, 00:21:18.030 "data_offset": 0, 00:21:18.030 "data_size": 65536 00:21:18.030 } 00:21:18.030 ] 00:21:18.030 }' 00:21:18.030 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.030 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.030 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.030 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.030 23:02:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.030 128.00 IOPS, 384.00 MiB/s [2024-12-09T23:02:33.886Z] [2024-12-09 23:02:33.818243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:18.030 [2024-12-09 23:02:33.819367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 
00:21:18.288 [2024-12-09 23:02:34.021371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:18.288 [2024-12-09 23:02:34.022046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:18.547 [2024-12-09 23:02:34.357357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.112 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.112 "name": "raid_bdev1", 00:21:19.112 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:19.112 "strip_size_kb": 0, 00:21:19.112 "state": "online", 00:21:19.112 
"raid_level": "raid1", 00:21:19.112 "superblock": false, 00:21:19.112 "num_base_bdevs": 4, 00:21:19.112 "num_base_bdevs_discovered": 3, 00:21:19.112 "num_base_bdevs_operational": 3, 00:21:19.112 "process": { 00:21:19.112 "type": "rebuild", 00:21:19.112 "target": "spare", 00:21:19.112 "progress": { 00:21:19.112 "blocks": 30720, 00:21:19.112 "percent": 46 00:21:19.112 } 00:21:19.112 }, 00:21:19.112 "base_bdevs_list": [ 00:21:19.112 { 00:21:19.113 "name": "spare", 00:21:19.113 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:19.113 "is_configured": true, 00:21:19.113 "data_offset": 0, 00:21:19.113 "data_size": 65536 00:21:19.113 }, 00:21:19.113 { 00:21:19.113 "name": null, 00:21:19.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.113 "is_configured": false, 00:21:19.113 "data_offset": 0, 00:21:19.113 "data_size": 65536 00:21:19.113 }, 00:21:19.113 { 00:21:19.113 "name": "BaseBdev3", 00:21:19.113 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:19.113 "is_configured": true, 00:21:19.113 "data_offset": 0, 00:21:19.113 "data_size": 65536 00:21:19.113 }, 00:21:19.113 { 00:21:19.113 "name": "BaseBdev4", 00:21:19.113 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:19.113 "is_configured": true, 00:21:19.113 "data_offset": 0, 00:21:19.113 "data_size": 65536 00:21:19.113 } 00:21:19.113 ] 00:21:19.113 }' 00:21:19.113 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.113 [2024-12-09 23:02:34.790429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:19.113 111.20 IOPS, 333.60 MiB/s [2024-12-09T23:02:34.969Z] 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.113 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.113 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:21:19.113 23:02:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.113 [2024-12-09 23:02:34.909056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:19.113 [2024-12-09 23:02:34.909662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:19.680 [2024-12-09 23:02:35.395076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:20.252 99.67 IOPS, 299.00 MiB/s [2024-12-09T23:02:36.108Z] 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.252 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.252 "name": 
"raid_bdev1", 00:21:20.252 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:20.252 "strip_size_kb": 0, 00:21:20.252 "state": "online", 00:21:20.252 "raid_level": "raid1", 00:21:20.252 "superblock": false, 00:21:20.252 "num_base_bdevs": 4, 00:21:20.252 "num_base_bdevs_discovered": 3, 00:21:20.252 "num_base_bdevs_operational": 3, 00:21:20.252 "process": { 00:21:20.252 "type": "rebuild", 00:21:20.253 "target": "spare", 00:21:20.253 "progress": { 00:21:20.253 "blocks": 45056, 00:21:20.253 "percent": 68 00:21:20.253 } 00:21:20.253 }, 00:21:20.253 "base_bdevs_list": [ 00:21:20.253 { 00:21:20.253 "name": "spare", 00:21:20.253 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:20.253 "is_configured": true, 00:21:20.253 "data_offset": 0, 00:21:20.253 "data_size": 65536 00:21:20.253 }, 00:21:20.253 { 00:21:20.253 "name": null, 00:21:20.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.253 "is_configured": false, 00:21:20.253 "data_offset": 0, 00:21:20.253 "data_size": 65536 00:21:20.253 }, 00:21:20.253 { 00:21:20.253 "name": "BaseBdev3", 00:21:20.253 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:20.253 "is_configured": true, 00:21:20.253 "data_offset": 0, 00:21:20.253 "data_size": 65536 00:21:20.253 }, 00:21:20.253 { 00:21:20.253 "name": "BaseBdev4", 00:21:20.253 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:20.253 "is_configured": true, 00:21:20.253 "data_offset": 0, 00:21:20.253 "data_size": 65536 00:21:20.253 } 00:21:20.253 ] 00:21:20.253 }' 00:21:20.253 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.253 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.253 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.253 23:02:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.253 23:02:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:20.253 [2024-12-09 23:02:36.076237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:21.216 91.29 IOPS, 273.86 MiB/s [2024-12-09T23:02:37.072Z] [2024-12-09 23:02:36.958850] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:21.216 23:02:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:21.216 23:02:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.216 23:02:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.216 23:02:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.216 23:02:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.216 23:02:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.216 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.216 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.216 23:02:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.216 23:02:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.216 23:02:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.216 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.216 "name": "raid_bdev1", 00:21:21.216 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:21.216 "strip_size_kb": 0, 00:21:21.216 "state": "online", 00:21:21.216 "raid_level": "raid1", 00:21:21.216 "superblock": false, 00:21:21.216 "num_base_bdevs": 
4, 00:21:21.216 "num_base_bdevs_discovered": 3, 00:21:21.216 "num_base_bdevs_operational": 3, 00:21:21.216 "process": { 00:21:21.216 "type": "rebuild", 00:21:21.216 "target": "spare", 00:21:21.216 "progress": { 00:21:21.216 "blocks": 65536, 00:21:21.216 "percent": 100 00:21:21.216 } 00:21:21.216 }, 00:21:21.216 "base_bdevs_list": [ 00:21:21.216 { 00:21:21.216 "name": "spare", 00:21:21.216 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:21.216 "is_configured": true, 00:21:21.216 "data_offset": 0, 00:21:21.216 "data_size": 65536 00:21:21.217 }, 00:21:21.217 { 00:21:21.217 "name": null, 00:21:21.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.217 "is_configured": false, 00:21:21.217 "data_offset": 0, 00:21:21.217 "data_size": 65536 00:21:21.217 }, 00:21:21.217 { 00:21:21.217 "name": "BaseBdev3", 00:21:21.217 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:21.217 "is_configured": true, 00:21:21.217 "data_offset": 0, 00:21:21.217 "data_size": 65536 00:21:21.217 }, 00:21:21.217 { 00:21:21.217 "name": "BaseBdev4", 00:21:21.217 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:21.217 "is_configured": true, 00:21:21.217 "data_offset": 0, 00:21:21.217 "data_size": 65536 00:21:21.217 } 00:21:21.217 ] 00:21:21.217 }' 00:21:21.217 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.217 [2024-12-09 23:02:37.065471] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:21.217 [2024-12-09 23:02:37.070084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.474 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.474 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.474 23:02:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.474 23:02:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:22.298 83.25 IOPS, 249.75 MiB/s [2024-12-09T23:02:38.154Z] 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.298 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.298 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.298 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.298 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.298 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.556 "name": "raid_bdev1", 00:21:22.556 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:22.556 "strip_size_kb": 0, 00:21:22.556 "state": "online", 00:21:22.556 "raid_level": "raid1", 00:21:22.556 "superblock": false, 00:21:22.556 "num_base_bdevs": 4, 00:21:22.556 "num_base_bdevs_discovered": 3, 00:21:22.556 "num_base_bdevs_operational": 3, 00:21:22.556 "base_bdevs_list": [ 00:21:22.556 { 00:21:22.556 "name": "spare", 00:21:22.556 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:22.556 "is_configured": true, 00:21:22.556 
"data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 }, 00:21:22.556 { 00:21:22.556 "name": null, 00:21:22.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.556 "is_configured": false, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 }, 00:21:22.556 { 00:21:22.556 "name": "BaseBdev3", 00:21:22.556 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:22.556 "is_configured": true, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 }, 00:21:22.556 { 00:21:22.556 "name": "BaseBdev4", 00:21:22.556 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:22.556 "is_configured": true, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 } 00:21:22.556 ] 00:21:22.556 }' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.556 23:02:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.556 "name": "raid_bdev1", 00:21:22.556 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:22.556 "strip_size_kb": 0, 00:21:22.556 "state": "online", 00:21:22.556 "raid_level": "raid1", 00:21:22.556 "superblock": false, 00:21:22.556 "num_base_bdevs": 4, 00:21:22.556 "num_base_bdevs_discovered": 3, 00:21:22.556 "num_base_bdevs_operational": 3, 00:21:22.556 "base_bdevs_list": [ 00:21:22.556 { 00:21:22.556 "name": "spare", 00:21:22.556 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:22.556 "is_configured": true, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 }, 00:21:22.556 { 00:21:22.556 "name": null, 00:21:22.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.556 "is_configured": false, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 }, 00:21:22.556 { 00:21:22.556 "name": "BaseBdev3", 00:21:22.556 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:22.556 "is_configured": true, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 }, 00:21:22.556 { 00:21:22.556 "name": "BaseBdev4", 00:21:22.556 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:22.556 "is_configured": true, 00:21:22.556 "data_offset": 0, 00:21:22.556 "data_size": 65536 00:21:22.556 } 00:21:22.556 ] 00:21:22.556 }' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:22.556 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.878 "name": "raid_bdev1", 00:21:22.878 "uuid": "55b33fb1-aec7-4bef-b50d-31eda87ad596", 00:21:22.878 "strip_size_kb": 0, 00:21:22.878 "state": "online", 00:21:22.878 "raid_level": "raid1", 00:21:22.878 "superblock": false, 00:21:22.878 "num_base_bdevs": 4, 00:21:22.878 "num_base_bdevs_discovered": 3, 00:21:22.878 "num_base_bdevs_operational": 3, 00:21:22.878 "base_bdevs_list": [ 00:21:22.878 { 00:21:22.878 "name": "spare", 00:21:22.878 "uuid": "dcb01394-ff55-5768-979c-63304e540b45", 00:21:22.878 "is_configured": true, 00:21:22.878 "data_offset": 0, 00:21:22.878 "data_size": 65536 00:21:22.878 }, 00:21:22.878 { 00:21:22.878 "name": null, 00:21:22.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.878 "is_configured": false, 00:21:22.878 "data_offset": 0, 00:21:22.878 "data_size": 65536 00:21:22.878 }, 00:21:22.878 { 00:21:22.878 "name": "BaseBdev3", 00:21:22.878 "uuid": "16fa809c-a91a-5a9d-8b54-e867901af078", 00:21:22.878 "is_configured": true, 00:21:22.878 "data_offset": 0, 00:21:22.878 "data_size": 65536 00:21:22.878 }, 00:21:22.878 { 00:21:22.878 "name": "BaseBdev4", 00:21:22.878 "uuid": "ad385082-6b0e-58c7-ac3d-8977d240ead4", 00:21:22.878 "is_configured": true, 00:21:22.878 "data_offset": 0, 00:21:22.878 "data_size": 65536 00:21:22.878 } 00:21:22.878 ] 00:21:22.878 }' 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.878 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.150 78.22 IOPS, 234.67 MiB/s [2024-12-09T23:02:39.006Z] 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.150 [2024-12-09 23:02:38.858233] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.150 [2024-12-09 23:02:38.858278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.150 00:21:23.150 Latency(us) 00:21:23.150 [2024-12-09T23:02:39.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.150 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:23.150 raid_bdev1 : 9.14 77.49 232.48 0.00 0.00 17677.82 347.00 119052.30 00:21:23.150 [2024-12-09T23:02:39.006Z] =================================================================================================================== 00:21:23.150 [2024-12-09T23:02:39.006Z] Total : 77.49 232.48 0.00 0.00 17677.82 347.00 119052.30 00:21:23.150 [2024-12-09 23:02:38.939927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.150 [2024-12-09 23:02:38.940018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.150 [2024-12-09 23:02:38.940128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.150 [2024-12-09 23:02:38.940141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:23.150 { 00:21:23.150 "results": [ 00:21:23.150 { 00:21:23.150 "job": "raid_bdev1", 00:21:23.150 "core_mask": "0x1", 00:21:23.150 "workload": "randrw", 00:21:23.150 "percentage": 50, 00:21:23.150 "status": "finished", 00:21:23.150 "queue_depth": 2, 00:21:23.150 "io_size": 3145728, 00:21:23.150 "runtime": 9.136466, 00:21:23.150 "iops": 77.49166909831438, 00:21:23.150 "mibps": 232.4750072949431, 00:21:23.150 "io_failed": 0, 00:21:23.150 "io_timeout": 0, 00:21:23.150 "avg_latency_us": 17677.821587348582, 00:21:23.150 "min_latency_us": 346.99737991266375, 00:21:23.150 "max_latency_us": 119052.29694323144 00:21:23.150 } 00:21:23.150 ], 00:21:23.150 
"core_count": 1 00:21:23.150 } 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:23.150 23:02:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.150 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:23.408 /dev/nbd0 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.667 1+0 records in 00:21:23.667 1+0 records out 00:21:23.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311854 s, 13.1 MB/s 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.667 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:23.667 /dev/nbd1 00:21:23.925 23:02:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.925 1+0 records in 00:21:23.925 1+0 records out 00:21:23.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439547 s, 9.3 MB/s 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.925 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:21:23.926 23:02:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:23.926 23:02:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in 
"${base_bdevs[@]:1}" 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:24.493 /dev/nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w 
nbd1 /proc/partitions 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.493 1+0 records in 00:21:24.493 1+0 records out 00:21:24.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290985 s, 14.1 MB/s 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:24.493 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.752 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.014 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:25.015 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.015 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:25.015 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:25.015 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:25.015 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.015 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79394 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79394 ']' 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79394 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79394 00:21:25.273 killing process with pid 79394 00:21:25.273 Received shutdown signal, test time was about 11.184601 seconds 00:21:25.273 00:21:25.273 Latency(us) 00:21:25.273 [2024-12-09T23:02:41.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.273 [2024-12-09T23:02:41.129Z] =================================================================================================================== 00:21:25.273 
[2024-12-09T23:02:41.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79394' 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79394 00:21:25.273 [2024-12-09 23:02:40.962204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.273 23:02:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79394 00:21:25.839 [2024-12-09 23:02:41.443965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:27.216 00:21:27.216 real 0m14.914s 00:21:27.216 user 0m18.605s 00:21:27.216 sys 0m1.898s 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.216 ************************************ 00:21:27.216 END TEST raid_rebuild_test_io 00:21:27.216 ************************************ 00:21:27.216 23:02:42 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:27.216 23:02:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:27.216 23:02:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.216 23:02:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.216 ************************************ 00:21:27.216 START TEST raid_rebuild_test_sb_io 00:21:27.216 ************************************ 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 
-- # raid_rebuild_test raid1 4 true true true 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79823 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79823 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79823 ']' 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 
-- # local max_retries=100 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.216 23:02:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.216 [2024-12-09 23:02:42.894080] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:21:27.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.216 Zero copy mechanism will not be used. 00:21:27.216 [2024-12-09 23:02:42.894288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79823 ] 00:21:27.477 [2024-12-09 23:02:43.072036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.477 [2024-12-09 23:02:43.198915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.738 [2024-12-09 23:02:43.416415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.738 [2024-12-09 23:02:43.416462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.997 BaseBdev1_malloc 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.997 [2024-12-09 23:02:43.823039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:27.997 [2024-12-09 23:02:43.823205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.997 [2024-12-09 23:02:43.823239] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:27.997 [2024-12-09 23:02:43.823253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.997 [2024-12-09 23:02:43.825833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.997 [2024-12-09 23:02:43.825886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:27.997 BaseBdev1 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.257 BaseBdev2_malloc 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.257 [2024-12-09 23:02:43.884053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:28.257 [2024-12-09 23:02:43.884203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.257 [2024-12-09 23:02:43.884230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:28.257 [2024-12-09 23:02:43.884242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.257 [2024-12-09 23:02:43.886409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.257 [2024-12-09 23:02:43.886453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.257 BaseBdev2 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.257 BaseBdev3_malloc 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.257 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.257 [2024-12-09 23:02:43.956343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:28.258 [2024-12-09 23:02:43.956497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.258 [2024-12-09 23:02:43.956530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:28.258 [2024-12-09 23:02:43.956544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.258 [2024-12-09 23:02:43.958869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.258 [2024-12-09 23:02:43.958913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:28.258 BaseBdev3 00:21:28.258 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.258 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:28.258 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 BaseBdev4_malloc 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:28.258 
23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 [2024-12-09 23:02:44.015876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:28.258 [2024-12-09 23:02:44.015946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.258 [2024-12-09 23:02:44.015971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:28.258 [2024-12-09 23:02:44.015984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.258 [2024-12-09 23:02:44.018268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.258 [2024-12-09 23:02:44.018313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:28.258 BaseBdev4 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 spare_malloc 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 spare_delay 00:21:28.258 23:02:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 [2024-12-09 23:02:44.087349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.258 [2024-12-09 23:02:44.087409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.258 [2024-12-09 23:02:44.087429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:28.258 [2024-12-09 23:02:44.087440] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.258 [2024-12-09 23:02:44.089854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.258 [2024-12-09 23:02:44.089896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.258 spare 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 [2024-12-09 23:02:44.099414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.258 [2024-12-09 23:02:44.101600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.258 [2024-12-09 23:02:44.101674] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:28.258 [2024-12-09 23:02:44.101738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:28.258 [2024-12-09 23:02:44.101958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:28.258 [2024-12-09 23:02:44.101974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:28.258 [2024-12-09 23:02:44.102267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:28.258 [2024-12-09 23:02:44.102491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:28.258 [2024-12-09 23:02:44.102505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:28.258 [2024-12-09 23:02:44.102708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.258 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.517 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.517 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.517 "name": "raid_bdev1", 00:21:28.517 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:28.517 "strip_size_kb": 0, 00:21:28.517 "state": "online", 00:21:28.517 "raid_level": "raid1", 00:21:28.517 "superblock": true, 00:21:28.517 "num_base_bdevs": 4, 00:21:28.517 "num_base_bdevs_discovered": 4, 00:21:28.517 "num_base_bdevs_operational": 4, 00:21:28.517 "base_bdevs_list": [ 00:21:28.517 { 00:21:28.517 "name": "BaseBdev1", 00:21:28.517 "uuid": "bfad6bee-d14a-549d-86e0-a308adadc687", 00:21:28.517 "is_configured": true, 00:21:28.517 "data_offset": 2048, 00:21:28.517 "data_size": 63488 00:21:28.517 }, 00:21:28.517 { 00:21:28.517 "name": "BaseBdev2", 00:21:28.517 "uuid": "943e4599-daf8-55bd-a4aa-823612161252", 00:21:28.517 "is_configured": true, 00:21:28.517 "data_offset": 2048, 00:21:28.517 "data_size": 63488 00:21:28.517 }, 00:21:28.517 { 00:21:28.517 "name": "BaseBdev3", 00:21:28.517 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:28.517 "is_configured": true, 00:21:28.517 "data_offset": 2048, 00:21:28.517 "data_size": 63488 00:21:28.517 }, 00:21:28.517 { 00:21:28.517 
"name": "BaseBdev4", 00:21:28.517 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:28.517 "is_configured": true, 00:21:28.517 "data_offset": 2048, 00:21:28.517 "data_size": 63488 00:21:28.517 } 00:21:28.517 ] 00:21:28.517 }' 00:21:28.517 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.517 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.776 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:28.776 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:28.776 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.776 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.776 [2024-12-09 23:02:44.602935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.776 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:29.034 
23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.034 [2024-12-09 23:02:44.698374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.034 23:02:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.034 "name": "raid_bdev1", 00:21:29.034 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:29.034 "strip_size_kb": 0, 00:21:29.034 "state": "online", 00:21:29.034 "raid_level": "raid1", 00:21:29.034 "superblock": true, 00:21:29.034 "num_base_bdevs": 4, 00:21:29.034 "num_base_bdevs_discovered": 3, 00:21:29.034 "num_base_bdevs_operational": 3, 00:21:29.034 "base_bdevs_list": [ 00:21:29.034 { 00:21:29.034 "name": null, 00:21:29.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.034 "is_configured": false, 00:21:29.034 "data_offset": 0, 00:21:29.034 "data_size": 63488 00:21:29.034 }, 00:21:29.034 { 00:21:29.034 "name": "BaseBdev2", 00:21:29.034 "uuid": "943e4599-daf8-55bd-a4aa-823612161252", 00:21:29.034 "is_configured": true, 00:21:29.034 "data_offset": 2048, 00:21:29.034 "data_size": 63488 00:21:29.034 }, 00:21:29.034 { 00:21:29.034 "name": "BaseBdev3", 00:21:29.034 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:29.034 "is_configured": true, 00:21:29.034 "data_offset": 2048, 00:21:29.034 "data_size": 63488 00:21:29.034 }, 00:21:29.034 { 00:21:29.034 "name": "BaseBdev4", 00:21:29.034 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:29.034 "is_configured": true, 00:21:29.034 "data_offset": 2048, 00:21:29.034 "data_size": 63488 00:21:29.034 } 00:21:29.034 ] 00:21:29.034 }' 00:21:29.034 23:02:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.034 23:02:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.034 [2024-12-09 23:02:44.803221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:29.034 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:29.034 Zero copy mechanism will not be used. 00:21:29.034 Running I/O for 60 seconds... 00:21:29.291 23:02:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:29.291 23:02:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.291 23:02:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.291 [2024-12-09 23:02:45.085138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.291 23:02:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.292 23:02:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:29.292 [2024-12-09 23:02:45.142729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:29.292 [2024-12-09 23:02:45.145043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:29.549 [2024-12-09 23:02:45.274687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:29.885 [2024-12-09 23:02:45.511759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:29.886 [2024-12-09 23:02:45.512730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:30.160 141.00 IOPS, 423.00 MiB/s [2024-12-09T23:02:46.016Z] [2024-12-09 23:02:45.865283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 
00:21:30.160 [2024-12-09 23:02:46.001029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:30.160 [2024-12-09 23:02:46.001863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.418 "name": "raid_bdev1", 00:21:30.418 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:30.418 "strip_size_kb": 0, 00:21:30.418 "state": "online", 00:21:30.418 "raid_level": "raid1", 00:21:30.418 "superblock": true, 00:21:30.418 "num_base_bdevs": 4, 00:21:30.418 "num_base_bdevs_discovered": 4, 00:21:30.418 "num_base_bdevs_operational": 4, 00:21:30.418 "process": { 00:21:30.418 "type": "rebuild", 00:21:30.418 
"target": "spare", 00:21:30.418 "progress": { 00:21:30.418 "blocks": 10240, 00:21:30.418 "percent": 16 00:21:30.418 } 00:21:30.418 }, 00:21:30.418 "base_bdevs_list": [ 00:21:30.418 { 00:21:30.418 "name": "spare", 00:21:30.418 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:30.418 "is_configured": true, 00:21:30.418 "data_offset": 2048, 00:21:30.418 "data_size": 63488 00:21:30.418 }, 00:21:30.418 { 00:21:30.418 "name": "BaseBdev2", 00:21:30.418 "uuid": "943e4599-daf8-55bd-a4aa-823612161252", 00:21:30.418 "is_configured": true, 00:21:30.418 "data_offset": 2048, 00:21:30.418 "data_size": 63488 00:21:30.418 }, 00:21:30.418 { 00:21:30.418 "name": "BaseBdev3", 00:21:30.418 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:30.418 "is_configured": true, 00:21:30.418 "data_offset": 2048, 00:21:30.418 "data_size": 63488 00:21:30.418 }, 00:21:30.418 { 00:21:30.418 "name": "BaseBdev4", 00:21:30.418 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:30.418 "is_configured": true, 00:21:30.418 "data_offset": 2048, 00:21:30.418 "data_size": 63488 00:21:30.418 } 00:21:30.418 ] 00:21:30.418 }' 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.418 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.418 [2024-12-09 23:02:46.250335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:30.676 [2024-12-09 23:02:46.346090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:30.676 [2024-12-09 23:02:46.456207] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:30.676 [2024-12-09 23:02:46.470204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.676 [2024-12-09 23:02:46.470354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.676 [2024-12-09 23:02:46.470376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:30.676 [2024-12-09 23:02:46.500663] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.935 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.936 
23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.936 "name": "raid_bdev1", 00:21:30.936 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:30.936 "strip_size_kb": 0, 00:21:30.936 "state": "online", 00:21:30.936 "raid_level": "raid1", 00:21:30.936 "superblock": true, 00:21:30.936 "num_base_bdevs": 4, 00:21:30.936 "num_base_bdevs_discovered": 3, 00:21:30.936 "num_base_bdevs_operational": 3, 00:21:30.936 "base_bdevs_list": [ 00:21:30.936 { 00:21:30.936 "name": null, 00:21:30.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.936 "is_configured": false, 00:21:30.936 "data_offset": 0, 00:21:30.936 "data_size": 63488 00:21:30.936 }, 00:21:30.936 { 00:21:30.936 "name": "BaseBdev2", 00:21:30.936 "uuid": "943e4599-daf8-55bd-a4aa-823612161252", 00:21:30.936 "is_configured": true, 00:21:30.936 "data_offset": 2048, 00:21:30.936 "data_size": 63488 00:21:30.936 }, 00:21:30.936 { 00:21:30.936 "name": "BaseBdev3", 00:21:30.936 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:30.936 "is_configured": true, 00:21:30.936 "data_offset": 2048, 00:21:30.936 "data_size": 63488 00:21:30.936 }, 00:21:30.936 { 00:21:30.936 "name": "BaseBdev4", 00:21:30.936 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:30.936 "is_configured": true, 00:21:30.936 "data_offset": 2048, 
00:21:30.936 "data_size": 63488 00:21:30.936 } 00:21:30.936 ] 00:21:30.936 }' 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.936 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:31.195 129.50 IOPS, 388.50 MiB/s [2024-12-09T23:02:47.051Z] 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.195 "name": "raid_bdev1", 00:21:31.195 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:31.195 "strip_size_kb": 0, 00:21:31.195 "state": "online", 00:21:31.195 "raid_level": "raid1", 00:21:31.195 "superblock": true, 00:21:31.195 "num_base_bdevs": 4, 00:21:31.195 "num_base_bdevs_discovered": 3, 00:21:31.195 "num_base_bdevs_operational": 3, 00:21:31.195 "base_bdevs_list": [ 00:21:31.195 { 00:21:31.195 "name": 
null, 00:21:31.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.195 "is_configured": false, 00:21:31.195 "data_offset": 0, 00:21:31.195 "data_size": 63488 00:21:31.195 }, 00:21:31.195 { 00:21:31.195 "name": "BaseBdev2", 00:21:31.195 "uuid": "943e4599-daf8-55bd-a4aa-823612161252", 00:21:31.195 "is_configured": true, 00:21:31.195 "data_offset": 2048, 00:21:31.195 "data_size": 63488 00:21:31.195 }, 00:21:31.195 { 00:21:31.195 "name": "BaseBdev3", 00:21:31.195 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:31.195 "is_configured": true, 00:21:31.195 "data_offset": 2048, 00:21:31.195 "data_size": 63488 00:21:31.195 }, 00:21:31.195 { 00:21:31.195 "name": "BaseBdev4", 00:21:31.195 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:31.195 "is_configured": true, 00:21:31.195 "data_offset": 2048, 00:21:31.195 "data_size": 63488 00:21:31.195 } 00:21:31.195 ] 00:21:31.195 }' 00:21:31.195 23:02:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.195 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:31.454 [2024-12-09 23:02:47.085556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.454 23:02:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:21:31.454 [2024-12-09 23:02:47.178160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:31.454 [2024-12-09 23:02:47.180432] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.713 [2024-12-09 23:02:47.320079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:31.713 [2024-12-09 23:02:47.457755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:31.713 [2024-12-09 23:02:47.458103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:31.972 [2024-12-09 23:02:47.684141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:31.972 [2024-12-09 23:02:47.684710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:31.972 [2024-12-09 23:02:47.803427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:31.972 [2024-12-09 23:02:47.803887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:32.540 139.00 IOPS, 417.00 MiB/s [2024-12-09T23:02:48.396Z] 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.540 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.540 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.540 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.540 23:02:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:32.541 [2024-12-09 23:02:48.148800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:32.541 [2024-12-09 23:02:48.150138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.541 "name": "raid_bdev1", 00:21:32.541 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:32.541 "strip_size_kb": 0, 00:21:32.541 "state": "online", 00:21:32.541 "raid_level": "raid1", 00:21:32.541 "superblock": true, 00:21:32.541 "num_base_bdevs": 4, 00:21:32.541 "num_base_bdevs_discovered": 4, 00:21:32.541 "num_base_bdevs_operational": 4, 00:21:32.541 "process": { 00:21:32.541 "type": "rebuild", 00:21:32.541 "target": "spare", 00:21:32.541 "progress": { 00:21:32.541 "blocks": 12288, 00:21:32.541 "percent": 19 00:21:32.541 } 00:21:32.541 }, 00:21:32.541 "base_bdevs_list": [ 00:21:32.541 { 00:21:32.541 "name": "spare", 00:21:32.541 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:32.541 "is_configured": true, 00:21:32.541 "data_offset": 2048, 00:21:32.541 "data_size": 63488 00:21:32.541 }, 00:21:32.541 { 00:21:32.541 "name": "BaseBdev2", 00:21:32.541 "uuid": 
"943e4599-daf8-55bd-a4aa-823612161252", 00:21:32.541 "is_configured": true, 00:21:32.541 "data_offset": 2048, 00:21:32.541 "data_size": 63488 00:21:32.541 }, 00:21:32.541 { 00:21:32.541 "name": "BaseBdev3", 00:21:32.541 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:32.541 "is_configured": true, 00:21:32.541 "data_offset": 2048, 00:21:32.541 "data_size": 63488 00:21:32.541 }, 00:21:32.541 { 00:21:32.541 "name": "BaseBdev4", 00:21:32.541 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:32.541 "is_configured": true, 00:21:32.541 "data_offset": 2048, 00:21:32.541 "data_size": 63488 00:21:32.541 } 00:21:32.541 ] 00:21:32.541 }' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:32.541 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:32.541 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:32.541 [2024-12-09 23:02:48.284466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.541 [2024-12-09 23:02:48.374923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:32.802 [2024-12-09 23:02:48.584510] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:32.802 [2024-12-09 23:02:48.584577] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.802 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.802 "name": "raid_bdev1", 00:21:32.802 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:32.802 "strip_size_kb": 0, 00:21:32.802 "state": "online", 00:21:32.802 "raid_level": "raid1", 00:21:32.802 "superblock": true, 00:21:32.802 "num_base_bdevs": 4, 00:21:32.802 "num_base_bdevs_discovered": 3, 00:21:32.802 "num_base_bdevs_operational": 3, 00:21:32.802 "process": { 00:21:32.802 "type": "rebuild", 00:21:32.802 "target": "spare", 00:21:32.802 "progress": { 00:21:32.802 "blocks": 16384, 00:21:32.802 "percent": 25 00:21:32.802 } 00:21:32.802 }, 00:21:32.802 "base_bdevs_list": [ 00:21:32.802 { 00:21:32.802 "name": "spare", 00:21:32.802 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:32.802 "is_configured": true, 00:21:32.802 "data_offset": 2048, 00:21:32.802 "data_size": 63488 00:21:32.802 }, 00:21:32.802 { 00:21:32.802 "name": null, 00:21:32.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.802 "is_configured": false, 00:21:32.802 "data_offset": 0, 00:21:32.802 "data_size": 63488 00:21:32.803 }, 00:21:32.803 { 00:21:32.803 "name": "BaseBdev3", 00:21:32.803 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:32.803 "is_configured": true, 00:21:32.803 "data_offset": 2048, 00:21:32.803 "data_size": 63488 00:21:32.803 }, 00:21:32.803 { 00:21:32.803 "name": "BaseBdev4", 00:21:32.803 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:32.803 "is_configured": true, 00:21:32.803 "data_offset": 2048, 00:21:32.803 "data_size": 63488 00:21:32.803 } 00:21:32.803 ] 00:21:32.803 }' 00:21:32.803 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.064 
23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.064 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.064 "name": "raid_bdev1", 00:21:33.064 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:33.064 "strip_size_kb": 0, 00:21:33.064 "state": "online", 00:21:33.064 "raid_level": "raid1", 00:21:33.064 "superblock": true, 00:21:33.064 "num_base_bdevs": 4, 00:21:33.064 "num_base_bdevs_discovered": 3, 
00:21:33.064 "num_base_bdevs_operational": 3, 00:21:33.064 "process": { 00:21:33.064 "type": "rebuild", 00:21:33.064 "target": "spare", 00:21:33.064 "progress": { 00:21:33.064 "blocks": 18432, 00:21:33.064 "percent": 29 00:21:33.064 } 00:21:33.064 }, 00:21:33.064 "base_bdevs_list": [ 00:21:33.064 { 00:21:33.064 "name": "spare", 00:21:33.064 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:33.064 "is_configured": true, 00:21:33.064 "data_offset": 2048, 00:21:33.064 "data_size": 63488 00:21:33.064 }, 00:21:33.064 { 00:21:33.064 "name": null, 00:21:33.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.064 "is_configured": false, 00:21:33.064 "data_offset": 0, 00:21:33.064 "data_size": 63488 00:21:33.064 }, 00:21:33.064 { 00:21:33.064 "name": "BaseBdev3", 00:21:33.064 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:33.064 "is_configured": true, 00:21:33.064 "data_offset": 2048, 00:21:33.064 "data_size": 63488 00:21:33.064 }, 00:21:33.064 { 00:21:33.065 "name": "BaseBdev4", 00:21:33.065 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:33.065 "is_configured": true, 00:21:33.065 "data_offset": 2048, 00:21:33.065 "data_size": 63488 00:21:33.065 } 00:21:33.065 ] 00:21:33.065 }' 00:21:33.065 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.065 121.50 IOPS, 364.50 MiB/s [2024-12-09T23:02:48.921Z] 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.065 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.065 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.065 23:02:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:33.641 [2024-12-09 23:02:49.199504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 
00:21:33.641 [2024-12-09 23:02:49.407645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:33.641 [2024-12-09 23:02:49.408247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:33.909 [2024-12-09 23:02:49.742575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:34.179 109.80 IOPS, 329.40 MiB/s [2024-12-09T23:02:50.035Z] 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.179 "name": "raid_bdev1", 00:21:34.179 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 
00:21:34.179 "strip_size_kb": 0, 00:21:34.179 "state": "online", 00:21:34.179 "raid_level": "raid1", 00:21:34.179 "superblock": true, 00:21:34.179 "num_base_bdevs": 4, 00:21:34.179 "num_base_bdevs_discovered": 3, 00:21:34.179 "num_base_bdevs_operational": 3, 00:21:34.179 "process": { 00:21:34.179 "type": "rebuild", 00:21:34.179 "target": "spare", 00:21:34.179 "progress": { 00:21:34.179 "blocks": 32768, 00:21:34.179 "percent": 51 00:21:34.179 } 00:21:34.179 }, 00:21:34.179 "base_bdevs_list": [ 00:21:34.179 { 00:21:34.179 "name": "spare", 00:21:34.179 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:34.179 "is_configured": true, 00:21:34.179 "data_offset": 2048, 00:21:34.179 "data_size": 63488 00:21:34.179 }, 00:21:34.179 { 00:21:34.179 "name": null, 00:21:34.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.179 "is_configured": false, 00:21:34.179 "data_offset": 0, 00:21:34.179 "data_size": 63488 00:21:34.179 }, 00:21:34.179 { 00:21:34.179 "name": "BaseBdev3", 00:21:34.179 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:34.179 "is_configured": true, 00:21:34.179 "data_offset": 2048, 00:21:34.179 "data_size": 63488 00:21:34.179 }, 00:21:34.179 { 00:21:34.179 "name": "BaseBdev4", 00:21:34.179 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:34.179 "is_configured": true, 00:21:34.179 "data_offset": 2048, 00:21:34.179 "data_size": 63488 00:21:34.179 } 00:21:34.179 ] 00:21:34.179 }' 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.179 [2024-12-09 23:02:49.965090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:34.179 23:02:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.179 23:02:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:34.179 23:02:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:34.766 [2024-12-09 23:02:50.546682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:35.037 [2024-12-09 23:02:50.780701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:35.297 97.67 IOPS, 293.00 MiB/s [2024-12-09T23:02:51.153Z] 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.297 "name": "raid_bdev1", 00:21:35.297 "uuid": 
"3925d507-29d4-4b74-88dd-856491194936", 00:21:35.297 "strip_size_kb": 0, 00:21:35.297 "state": "online", 00:21:35.297 "raid_level": "raid1", 00:21:35.297 "superblock": true, 00:21:35.297 "num_base_bdevs": 4, 00:21:35.297 "num_base_bdevs_discovered": 3, 00:21:35.297 "num_base_bdevs_operational": 3, 00:21:35.297 "process": { 00:21:35.297 "type": "rebuild", 00:21:35.297 "target": "spare", 00:21:35.297 "progress": { 00:21:35.297 "blocks": 49152, 00:21:35.297 "percent": 77 00:21:35.297 } 00:21:35.297 }, 00:21:35.297 "base_bdevs_list": [ 00:21:35.297 { 00:21:35.297 "name": "spare", 00:21:35.297 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:35.297 "is_configured": true, 00:21:35.297 "data_offset": 2048, 00:21:35.297 "data_size": 63488 00:21:35.297 }, 00:21:35.297 { 00:21:35.297 "name": null, 00:21:35.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.297 "is_configured": false, 00:21:35.297 "data_offset": 0, 00:21:35.297 "data_size": 63488 00:21:35.297 }, 00:21:35.297 { 00:21:35.297 "name": "BaseBdev3", 00:21:35.297 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:35.297 "is_configured": true, 00:21:35.297 "data_offset": 2048, 00:21:35.297 "data_size": 63488 00:21:35.297 }, 00:21:35.297 { 00:21:35.297 "name": "BaseBdev4", 00:21:35.297 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:35.297 "is_configured": true, 00:21:35.297 "data_offset": 2048, 00:21:35.297 "data_size": 63488 00:21:35.297 } 00:21:35.297 ] 00:21:35.297 }' 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.297 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.555 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.555 23:02:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:36.121 [2024-12-09 23:02:51.762727] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:36.121 88.71 IOPS, 266.14 MiB/s [2024-12-09T23:02:51.977Z] [2024-12-09 23:02:51.862547] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:36.121 [2024-12-09 23:02:51.865027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.380 "name": "raid_bdev1", 00:21:36.380 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:36.380 "strip_size_kb": 0, 00:21:36.380 "state": "online", 00:21:36.380 
"raid_level": "raid1", 00:21:36.380 "superblock": true, 00:21:36.380 "num_base_bdevs": 4, 00:21:36.380 "num_base_bdevs_discovered": 3, 00:21:36.380 "num_base_bdevs_operational": 3, 00:21:36.380 "base_bdevs_list": [ 00:21:36.380 { 00:21:36.380 "name": "spare", 00:21:36.380 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:36.380 "is_configured": true, 00:21:36.380 "data_offset": 2048, 00:21:36.380 "data_size": 63488 00:21:36.380 }, 00:21:36.380 { 00:21:36.380 "name": null, 00:21:36.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.380 "is_configured": false, 00:21:36.380 "data_offset": 0, 00:21:36.380 "data_size": 63488 00:21:36.380 }, 00:21:36.380 { 00:21:36.380 "name": "BaseBdev3", 00:21:36.380 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:36.380 "is_configured": true, 00:21:36.380 "data_offset": 2048, 00:21:36.380 "data_size": 63488 00:21:36.380 }, 00:21:36.380 { 00:21:36.380 "name": "BaseBdev4", 00:21:36.380 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:36.380 "is_configured": true, 00:21:36.380 "data_offset": 2048, 00:21:36.380 "data_size": 63488 00:21:36.380 } 00:21:36.380 ] 00:21:36.380 }' 00:21:36.380 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.640 23:02:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.640 "name": "raid_bdev1", 00:21:36.640 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:36.640 "strip_size_kb": 0, 00:21:36.640 "state": "online", 00:21:36.640 "raid_level": "raid1", 00:21:36.640 "superblock": true, 00:21:36.640 "num_base_bdevs": 4, 00:21:36.640 "num_base_bdevs_discovered": 3, 00:21:36.640 "num_base_bdevs_operational": 3, 00:21:36.640 "base_bdevs_list": [ 00:21:36.640 { 00:21:36.640 "name": "spare", 00:21:36.640 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:36.640 "is_configured": true, 00:21:36.640 "data_offset": 2048, 00:21:36.640 "data_size": 63488 00:21:36.640 }, 00:21:36.640 { 00:21:36.640 "name": null, 00:21:36.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.640 "is_configured": false, 00:21:36.640 "data_offset": 0, 00:21:36.640 "data_size": 63488 00:21:36.640 }, 00:21:36.640 { 00:21:36.640 "name": "BaseBdev3", 00:21:36.640 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:36.640 "is_configured": true, 00:21:36.640 "data_offset": 2048, 00:21:36.640 
"data_size": 63488 00:21:36.640 }, 00:21:36.640 { 00:21:36.640 "name": "BaseBdev4", 00:21:36.640 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:36.640 "is_configured": true, 00:21:36.640 "data_offset": 2048, 00:21:36.640 "data_size": 63488 00:21:36.640 } 00:21:36.640 ] 00:21:36.640 }' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.640 "name": "raid_bdev1", 00:21:36.640 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:36.640 "strip_size_kb": 0, 00:21:36.640 "state": "online", 00:21:36.640 "raid_level": "raid1", 00:21:36.640 "superblock": true, 00:21:36.640 "num_base_bdevs": 4, 00:21:36.640 "num_base_bdevs_discovered": 3, 00:21:36.640 "num_base_bdevs_operational": 3, 00:21:36.640 "base_bdevs_list": [ 00:21:36.640 { 00:21:36.640 "name": "spare", 00:21:36.640 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:36.640 "is_configured": true, 00:21:36.640 "data_offset": 2048, 00:21:36.640 "data_size": 63488 00:21:36.640 }, 00:21:36.640 { 00:21:36.640 "name": null, 00:21:36.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.640 "is_configured": false, 00:21:36.640 "data_offset": 0, 00:21:36.640 "data_size": 63488 00:21:36.640 }, 00:21:36.640 { 00:21:36.640 "name": "BaseBdev3", 00:21:36.640 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:36.640 "is_configured": true, 00:21:36.640 "data_offset": 2048, 00:21:36.640 "data_size": 63488 00:21:36.640 }, 00:21:36.640 { 00:21:36.640 "name": "BaseBdev4", 00:21:36.640 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:36.640 "is_configured": true, 00:21:36.640 "data_offset": 2048, 00:21:36.640 "data_size": 63488 00:21:36.640 } 00:21:36.640 ] 00:21:36.640 }' 00:21:36.640 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.640 23:02:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.209 83.50 IOPS, 250.50 MiB/s [2024-12-09T23:02:53.065Z] 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.209 [2024-12-09 23:02:52.836522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.209 [2024-12-09 23:02:52.836619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.209 00:21:37.209 Latency(us) 00:21:37.209 [2024-12-09T23:02:53.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.209 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:37.209 raid_bdev1 : 8.10 82.73 248.20 0.00 0.00 15953.85 373.83 119968.08 00:21:37.209 [2024-12-09T23:02:53.065Z] =================================================================================================================== 00:21:37.209 [2024-12-09T23:02:53.065Z] Total : 82.73 248.20 0.00 0.00 15953.85 373.83 119968.08 00:21:37.209 [2024-12-09 23:02:52.916192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.209 [2024-12-09 23:02:52.916290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.209 [2024-12-09 23:02:52.916410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.209 [2024-12-09 23:02:52.916425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:37.209 { 00:21:37.209 "results": [ 00:21:37.209 { 00:21:37.209 "job": "raid_bdev1", 00:21:37.209 "core_mask": "0x1", 00:21:37.209 "workload": "randrw", 
00:21:37.209 "percentage": 50, 00:21:37.209 "status": "finished", 00:21:37.209 "queue_depth": 2, 00:21:37.209 "io_size": 3145728, 00:21:37.209 "runtime": 8.09826, 00:21:37.209 "iops": 82.7338218333321, 00:21:37.209 "mibps": 248.2014654999963, 00:21:37.209 "io_failed": 0, 00:21:37.209 "io_timeout": 0, 00:21:37.209 "avg_latency_us": 15953.847920224209, 00:21:37.209 "min_latency_us": 373.82707423580786, 00:21:37.209 "max_latency_us": 119968.08384279476 00:21:37.209 } 00:21:37.209 ], 00:21:37.209 "core_count": 1 00:21:37.209 } 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.209 23:02:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:37.468 /dev/nbd0 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.468 1+0 records in 00:21:37.468 1+0 records out 00:21:37.468 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000559659 s, 7.3 MB/s 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:37.468 
23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.468 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:37.727 /dev/nbd1 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.727 1+0 records in 00:21:37.727 1+0 records out 00:21:37.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025457 s, 16.1 MB/s 00:21:37.727 
23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.727 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.985 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:38.247 23:02:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:38.247 23:02:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:38.507 /dev/nbd1 
00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:38.507 1+0 records in 00:21:38.507 1+0 records out 00:21:38.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005473 s, 7.5 MB/s 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:38.507 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:38.790 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:39.048 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:39.306 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:39.306 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:39.306 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:39.306 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.307 [2024-12-09 23:02:54.936574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:39.307 [2024-12-09 23:02:54.936666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.307 [2024-12-09 23:02:54.936694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:39.307 [2024-12-09 23:02:54.936708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.307 [2024-12-09 23:02:54.939324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.307 [2024-12-09 23:02:54.939378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:39.307 [2024-12-09 23:02:54.939518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:39.307 [2024-12-09 23:02:54.939585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:39.307 [2024-12-09 23:02:54.939766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.307 [2024-12-09 23:02:54.939897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.307 spare 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.307 23:02:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.307 23:02:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.307 [2024-12-09 23:02:55.039832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:39.307 [2024-12-09 23:02:55.039913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:39.307 [2024-12-09 23:02:55.040321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:21:39.307 [2024-12-09 23:02:55.040601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:39.307 [2024-12-09 23:02:55.040622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:39.307 [2024-12-09 23:02:55.040886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.307 "name": "raid_bdev1", 00:21:39.307 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:39.307 "strip_size_kb": 0, 00:21:39.307 "state": "online", 00:21:39.307 "raid_level": "raid1", 00:21:39.307 "superblock": true, 00:21:39.307 "num_base_bdevs": 4, 00:21:39.307 "num_base_bdevs_discovered": 3, 00:21:39.307 "num_base_bdevs_operational": 3, 00:21:39.307 "base_bdevs_list": [ 00:21:39.307 { 00:21:39.307 "name": "spare", 00:21:39.307 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:39.307 "is_configured": true, 00:21:39.307 "data_offset": 2048, 00:21:39.307 "data_size": 63488 00:21:39.307 }, 00:21:39.307 { 00:21:39.307 "name": null, 00:21:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.307 "is_configured": false, 00:21:39.307 "data_offset": 2048, 00:21:39.307 "data_size": 63488 00:21:39.307 }, 00:21:39.307 { 00:21:39.307 "name": "BaseBdev3", 00:21:39.307 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:39.307 "is_configured": true, 00:21:39.307 "data_offset": 
2048, 00:21:39.307 "data_size": 63488 00:21:39.307 }, 00:21:39.307 { 00:21:39.307 "name": "BaseBdev4", 00:21:39.307 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:39.307 "is_configured": true, 00:21:39.307 "data_offset": 2048, 00:21:39.307 "data_size": 63488 00:21:39.307 } 00:21:39.307 ] 00:21:39.307 }' 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.307 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.876 "name": "raid_bdev1", 00:21:39.876 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:39.876 "strip_size_kb": 0, 00:21:39.876 "state": "online", 00:21:39.876 "raid_level": "raid1", 00:21:39.876 "superblock": true, 00:21:39.876 
"num_base_bdevs": 4, 00:21:39.876 "num_base_bdevs_discovered": 3, 00:21:39.876 "num_base_bdevs_operational": 3, 00:21:39.876 "base_bdevs_list": [ 00:21:39.876 { 00:21:39.876 "name": "spare", 00:21:39.876 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:39.876 "is_configured": true, 00:21:39.876 "data_offset": 2048, 00:21:39.876 "data_size": 63488 00:21:39.876 }, 00:21:39.876 { 00:21:39.876 "name": null, 00:21:39.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.876 "is_configured": false, 00:21:39.876 "data_offset": 2048, 00:21:39.876 "data_size": 63488 00:21:39.876 }, 00:21:39.876 { 00:21:39.876 "name": "BaseBdev3", 00:21:39.876 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:39.876 "is_configured": true, 00:21:39.876 "data_offset": 2048, 00:21:39.876 "data_size": 63488 00:21:39.876 }, 00:21:39.876 { 00:21:39.876 "name": "BaseBdev4", 00:21:39.876 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:39.876 "is_configured": true, 00:21:39.876 "data_offset": 2048, 00:21:39.876 "data_size": 63488 00:21:39.876 } 00:21:39.876 ] 00:21:39.876 }' 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 
23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 [2024-12-09 23:02:55.636178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.876 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.876 "name": "raid_bdev1", 00:21:39.876 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:39.876 "strip_size_kb": 0, 00:21:39.876 "state": "online", 00:21:39.876 "raid_level": "raid1", 00:21:39.876 "superblock": true, 00:21:39.876 "num_base_bdevs": 4, 00:21:39.876 "num_base_bdevs_discovered": 2, 00:21:39.876 "num_base_bdevs_operational": 2, 00:21:39.876 "base_bdevs_list": [ 00:21:39.876 { 00:21:39.876 "name": null, 00:21:39.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.876 "is_configured": false, 00:21:39.876 "data_offset": 0, 00:21:39.876 "data_size": 63488 00:21:39.876 }, 00:21:39.876 { 00:21:39.876 "name": null, 00:21:39.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.876 "is_configured": false, 00:21:39.876 "data_offset": 2048, 00:21:39.876 "data_size": 63488 00:21:39.876 }, 00:21:39.876 { 00:21:39.876 "name": "BaseBdev3", 00:21:39.877 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:39.877 "is_configured": true, 00:21:39.877 "data_offset": 2048, 00:21:39.877 "data_size": 63488 00:21:39.877 }, 00:21:39.877 { 00:21:39.877 "name": "BaseBdev4", 00:21:39.877 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:39.877 "is_configured": true, 00:21:39.877 "data_offset": 2048, 00:21:39.877 "data_size": 63488 00:21:39.877 } 00:21:39.877 ] 00:21:39.877 }' 00:21:39.877 23:02:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.877 23:02:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:40.446 23:02:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:40.446 23:02:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.446 23:02:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:40.446 [2024-12-09 23:02:56.011657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:40.446 [2024-12-09 23:02:56.011889] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:40.446 [2024-12-09 23:02:56.011914] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:40.446 [2024-12-09 23:02:56.011957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:40.446 [2024-12-09 23:02:56.029305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:21:40.446 23:02:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.446 23:02:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:40.446 [2024-12-09 23:02:56.031590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.420 "name": "raid_bdev1", 00:21:41.420 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:41.420 "strip_size_kb": 0, 00:21:41.420 "state": "online", 00:21:41.420 "raid_level": "raid1", 00:21:41.420 "superblock": true, 00:21:41.420 "num_base_bdevs": 4, 00:21:41.420 "num_base_bdevs_discovered": 3, 00:21:41.420 "num_base_bdevs_operational": 3, 00:21:41.420 "process": { 00:21:41.420 "type": "rebuild", 00:21:41.420 "target": "spare", 00:21:41.420 "progress": { 00:21:41.420 "blocks": 20480, 00:21:41.420 "percent": 32 00:21:41.420 } 00:21:41.420 }, 00:21:41.420 "base_bdevs_list": [ 00:21:41.420 { 00:21:41.420 "name": "spare", 00:21:41.420 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:41.420 "is_configured": true, 00:21:41.420 "data_offset": 2048, 00:21:41.420 "data_size": 63488 00:21:41.420 }, 00:21:41.420 { 00:21:41.420 "name": null, 00:21:41.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.420 "is_configured": false, 00:21:41.420 "data_offset": 2048, 00:21:41.420 "data_size": 63488 00:21:41.420 }, 00:21:41.420 { 00:21:41.420 "name": "BaseBdev3", 00:21:41.420 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:41.420 "is_configured": true, 00:21:41.420 "data_offset": 2048, 00:21:41.420 "data_size": 63488 00:21:41.420 }, 00:21:41.420 { 
00:21:41.420 "name": "BaseBdev4", 00:21:41.420 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:41.420 "is_configured": true, 00:21:41.420 "data_offset": 2048, 00:21:41.420 "data_size": 63488 00:21:41.420 } 00:21:41.420 ] 00:21:41.420 }' 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.420 [2024-12-09 23:02:57.174946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:41.420 [2024-12-09 23:02:57.237896] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:41.420 [2024-12-09 23:02:57.237979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.420 [2024-12-09 23:02:57.238000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:41.420 [2024-12-09 23:02:57.238014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.420 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.679 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.679 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.679 "name": "raid_bdev1", 00:21:41.679 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:41.679 "strip_size_kb": 0, 00:21:41.679 "state": "online", 00:21:41.679 "raid_level": "raid1", 00:21:41.679 "superblock": true, 00:21:41.679 "num_base_bdevs": 4, 00:21:41.679 "num_base_bdevs_discovered": 2, 00:21:41.679 "num_base_bdevs_operational": 2, 00:21:41.679 "base_bdevs_list": [ 00:21:41.679 { 00:21:41.679 
"name": null, 00:21:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.679 "is_configured": false, 00:21:41.679 "data_offset": 0, 00:21:41.679 "data_size": 63488 00:21:41.679 }, 00:21:41.679 { 00:21:41.679 "name": null, 00:21:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.679 "is_configured": false, 00:21:41.679 "data_offset": 2048, 00:21:41.679 "data_size": 63488 00:21:41.679 }, 00:21:41.679 { 00:21:41.679 "name": "BaseBdev3", 00:21:41.679 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:41.679 "is_configured": true, 00:21:41.679 "data_offset": 2048, 00:21:41.679 "data_size": 63488 00:21:41.679 }, 00:21:41.679 { 00:21:41.679 "name": "BaseBdev4", 00:21:41.679 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:41.679 "is_configured": true, 00:21:41.679 "data_offset": 2048, 00:21:41.679 "data_size": 63488 00:21:41.679 } 00:21:41.679 ] 00:21:41.679 }' 00:21:41.679 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.679 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.938 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:41.938 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.938 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.938 [2024-12-09 23:02:57.668680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:41.938 [2024-12-09 23:02:57.668776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.938 [2024-12-09 23:02:57.668807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:41.938 [2024-12-09 23:02:57.668821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.938 [2024-12-09 23:02:57.669381] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.938 [2024-12-09 23:02:57.669422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:41.938 [2024-12-09 23:02:57.669552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:41.938 [2024-12-09 23:02:57.669578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:41.938 [2024-12-09 23:02:57.669591] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:41.938 [2024-12-09 23:02:57.669622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:41.938 [2024-12-09 23:02:57.688525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:21:41.938 spare 00:21:41.938 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.938 23:02:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:41.938 [2024-12-09 23:02:57.690754] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.872 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.129 "name": "raid_bdev1", 00:21:43.129 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:43.129 "strip_size_kb": 0, 00:21:43.129 "state": "online", 00:21:43.129 "raid_level": "raid1", 00:21:43.129 "superblock": true, 00:21:43.129 "num_base_bdevs": 4, 00:21:43.129 "num_base_bdevs_discovered": 3, 00:21:43.129 "num_base_bdevs_operational": 3, 00:21:43.129 "process": { 00:21:43.129 "type": "rebuild", 00:21:43.129 "target": "spare", 00:21:43.129 "progress": { 00:21:43.129 "blocks": 20480, 00:21:43.129 "percent": 32 00:21:43.129 } 00:21:43.129 }, 00:21:43.129 "base_bdevs_list": [ 00:21:43.129 { 00:21:43.129 "name": "spare", 00:21:43.129 "uuid": "ecf4b79c-dd77-55f7-80eb-57986101553c", 00:21:43.129 "is_configured": true, 00:21:43.129 "data_offset": 2048, 00:21:43.129 "data_size": 63488 00:21:43.129 }, 00:21:43.129 { 00:21:43.129 "name": null, 00:21:43.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.129 "is_configured": false, 00:21:43.129 "data_offset": 2048, 00:21:43.129 "data_size": 63488 00:21:43.129 }, 00:21:43.129 { 00:21:43.129 "name": "BaseBdev3", 00:21:43.129 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:43.129 "is_configured": true, 00:21:43.129 "data_offset": 2048, 00:21:43.129 "data_size": 63488 00:21:43.129 }, 00:21:43.129 { 00:21:43.129 "name": "BaseBdev4", 00:21:43.129 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:43.129 "is_configured": true, 00:21:43.129 "data_offset": 2048, 00:21:43.129 "data_size": 63488 00:21:43.129 } 00:21:43.129 
] 00:21:43.129 }' 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.129 [2024-12-09 23:02:58.810236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:43.129 [2024-12-09 23:02:58.897180] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:43.129 [2024-12-09 23:02:58.897280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.129 [2024-12-09 23:02:58.897307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:43.129 [2024-12-09 23:02:58.897316] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.129 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.129 "name": "raid_bdev1", 00:21:43.129 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:43.129 "strip_size_kb": 0, 00:21:43.129 "state": "online", 00:21:43.129 "raid_level": "raid1", 00:21:43.129 "superblock": true, 00:21:43.129 "num_base_bdevs": 4, 00:21:43.129 "num_base_bdevs_discovered": 2, 00:21:43.129 "num_base_bdevs_operational": 2, 00:21:43.129 "base_bdevs_list": [ 00:21:43.129 { 00:21:43.129 "name": null, 00:21:43.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.130 "is_configured": false, 00:21:43.130 "data_offset": 0, 00:21:43.130 "data_size": 63488 00:21:43.130 }, 00:21:43.130 { 
00:21:43.130 "name": null, 00:21:43.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.130 "is_configured": false, 00:21:43.130 "data_offset": 2048, 00:21:43.130 "data_size": 63488 00:21:43.130 }, 00:21:43.130 { 00:21:43.130 "name": "BaseBdev3", 00:21:43.130 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:43.130 "is_configured": true, 00:21:43.130 "data_offset": 2048, 00:21:43.130 "data_size": 63488 00:21:43.130 }, 00:21:43.130 { 00:21:43.130 "name": "BaseBdev4", 00:21:43.130 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:43.130 "is_configured": true, 00:21:43.130 "data_offset": 2048, 00:21:43.130 "data_size": 63488 00:21:43.130 } 00:21:43.130 ] 00:21:43.130 }' 00:21:43.130 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.130 23:02:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.698 "name": "raid_bdev1", 00:21:43.698 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:43.698 "strip_size_kb": 0, 00:21:43.698 "state": "online", 00:21:43.698 "raid_level": "raid1", 00:21:43.698 "superblock": true, 00:21:43.698 "num_base_bdevs": 4, 00:21:43.698 "num_base_bdevs_discovered": 2, 00:21:43.698 "num_base_bdevs_operational": 2, 00:21:43.698 "base_bdevs_list": [ 00:21:43.698 { 00:21:43.698 "name": null, 00:21:43.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.698 "is_configured": false, 00:21:43.698 "data_offset": 0, 00:21:43.698 "data_size": 63488 00:21:43.698 }, 00:21:43.698 { 00:21:43.698 "name": null, 00:21:43.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.698 "is_configured": false, 00:21:43.698 "data_offset": 2048, 00:21:43.698 "data_size": 63488 00:21:43.698 }, 00:21:43.698 { 00:21:43.698 "name": "BaseBdev3", 00:21:43.698 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:43.698 "is_configured": true, 00:21:43.698 "data_offset": 2048, 00:21:43.698 "data_size": 63488 00:21:43.698 }, 00:21:43.698 { 00:21:43.698 "name": "BaseBdev4", 00:21:43.698 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:43.698 "is_configured": true, 00:21:43.698 "data_offset": 2048, 00:21:43.698 "data_size": 63488 00:21:43.698 } 00:21:43.698 ] 00:21:43.698 }' 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.698 [2024-12-09 23:02:59.459471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:43.698 [2024-12-09 23:02:59.459541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.698 [2024-12-09 23:02:59.459568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:43.698 [2024-12-09 23:02:59.459580] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.698 [2024-12-09 23:02:59.460120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.698 [2024-12-09 23:02:59.460153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:43.698 [2024-12-09 23:02:59.460253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:43.698 [2024-12-09 23:02:59.460274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:43.698 [2024-12-09 23:02:59.460286] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:43.698 [2024-12-09 23:02:59.460301] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:43.698 BaseBdev1 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.698 23:02:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.637 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.895 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.895 "name": "raid_bdev1", 00:21:44.895 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:44.895 "strip_size_kb": 0, 00:21:44.895 "state": "online", 00:21:44.895 "raid_level": "raid1", 00:21:44.895 "superblock": true, 00:21:44.895 "num_base_bdevs": 4, 00:21:44.895 "num_base_bdevs_discovered": 2, 00:21:44.895 "num_base_bdevs_operational": 2, 00:21:44.895 "base_bdevs_list": [ 00:21:44.895 { 00:21:44.895 "name": null, 00:21:44.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.895 "is_configured": false, 00:21:44.895 "data_offset": 0, 00:21:44.895 "data_size": 63488 00:21:44.895 }, 00:21:44.895 { 00:21:44.895 "name": null, 00:21:44.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.895 "is_configured": false, 00:21:44.895 "data_offset": 2048, 00:21:44.895 "data_size": 63488 00:21:44.895 }, 00:21:44.895 { 00:21:44.895 "name": "BaseBdev3", 00:21:44.895 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:44.895 "is_configured": true, 00:21:44.895 "data_offset": 2048, 00:21:44.895 "data_size": 63488 00:21:44.895 }, 00:21:44.895 { 00:21:44.895 "name": "BaseBdev4", 00:21:44.895 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:44.895 "is_configured": true, 00:21:44.895 "data_offset": 2048, 00:21:44.895 "data_size": 63488 00:21:44.895 } 00:21:44.895 ] 00:21:44.895 }' 00:21:44.895 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.895 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.154 "name": "raid_bdev1", 00:21:45.154 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:45.154 "strip_size_kb": 0, 00:21:45.154 "state": "online", 00:21:45.154 "raid_level": "raid1", 00:21:45.154 "superblock": true, 00:21:45.154 "num_base_bdevs": 4, 00:21:45.154 "num_base_bdevs_discovered": 2, 00:21:45.154 "num_base_bdevs_operational": 2, 00:21:45.154 "base_bdevs_list": [ 00:21:45.154 { 00:21:45.154 "name": null, 00:21:45.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.154 "is_configured": false, 00:21:45.154 "data_offset": 0, 00:21:45.154 "data_size": 63488 00:21:45.154 }, 00:21:45.154 { 00:21:45.154 "name": null, 00:21:45.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.154 "is_configured": false, 00:21:45.154 "data_offset": 2048, 00:21:45.154 "data_size": 63488 00:21:45.154 }, 00:21:45.154 { 00:21:45.154 "name": "BaseBdev3", 00:21:45.154 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:45.154 "is_configured": true, 00:21:45.154 "data_offset": 2048, 00:21:45.154 "data_size": 63488 00:21:45.154 }, 00:21:45.154 { 00:21:45.154 
"name": "BaseBdev4", 00:21:45.154 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:45.154 "is_configured": true, 00:21:45.154 "data_offset": 2048, 00:21:45.154 "data_size": 63488 00:21:45.154 } 00:21:45.154 ] 00:21:45.154 }' 00:21:45.154 23:03:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.412 [2024-12-09 23:03:01.073253] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.412 [2024-12-09 23:03:01.073548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:45.412 [2024-12-09 23:03:01.073577] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:45.412 request: 00:21:45.412 { 00:21:45.412 "base_bdev": "BaseBdev1", 00:21:45.412 "raid_bdev": "raid_bdev1", 00:21:45.412 "method": "bdev_raid_add_base_bdev", 00:21:45.412 "req_id": 1 00:21:45.412 } 00:21:45.412 Got JSON-RPC error response 00:21:45.412 response: 00:21:45.412 { 00:21:45.412 "code": -22, 00:21:45.412 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:45.412 } 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:45.412 23:03:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.350 "name": "raid_bdev1", 00:21:46.350 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:46.350 "strip_size_kb": 0, 00:21:46.350 "state": "online", 00:21:46.350 "raid_level": "raid1", 00:21:46.350 "superblock": true, 00:21:46.350 "num_base_bdevs": 4, 00:21:46.350 "num_base_bdevs_discovered": 2, 00:21:46.350 "num_base_bdevs_operational": 2, 00:21:46.350 "base_bdevs_list": [ 00:21:46.350 { 00:21:46.350 "name": null, 00:21:46.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.350 "is_configured": false, 00:21:46.350 "data_offset": 0, 00:21:46.350 "data_size": 63488 00:21:46.350 }, 00:21:46.350 { 00:21:46.350 "name": null, 00:21:46.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.350 "is_configured": false, 
00:21:46.350 "data_offset": 2048, 00:21:46.350 "data_size": 63488 00:21:46.350 }, 00:21:46.350 { 00:21:46.350 "name": "BaseBdev3", 00:21:46.350 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:46.350 "is_configured": true, 00:21:46.350 "data_offset": 2048, 00:21:46.350 "data_size": 63488 00:21:46.350 }, 00:21:46.350 { 00:21:46.350 "name": "BaseBdev4", 00:21:46.350 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:46.350 "is_configured": true, 00:21:46.350 "data_offset": 2048, 00:21:46.350 "data_size": 63488 00:21:46.350 } 00:21:46.350 ] 00:21:46.350 }' 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.350 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:46.918 "name": "raid_bdev1", 00:21:46.918 "uuid": "3925d507-29d4-4b74-88dd-856491194936", 00:21:46.918 "strip_size_kb": 0, 00:21:46.918 "state": "online", 00:21:46.918 "raid_level": "raid1", 00:21:46.918 "superblock": true, 00:21:46.918 "num_base_bdevs": 4, 00:21:46.918 "num_base_bdevs_discovered": 2, 00:21:46.918 "num_base_bdevs_operational": 2, 00:21:46.918 "base_bdevs_list": [ 00:21:46.918 { 00:21:46.918 "name": null, 00:21:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.918 "is_configured": false, 00:21:46.918 "data_offset": 0, 00:21:46.918 "data_size": 63488 00:21:46.918 }, 00:21:46.918 { 00:21:46.918 "name": null, 00:21:46.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.918 "is_configured": false, 00:21:46.918 "data_offset": 2048, 00:21:46.918 "data_size": 63488 00:21:46.918 }, 00:21:46.918 { 00:21:46.918 "name": "BaseBdev3", 00:21:46.918 "uuid": "865ba096-6b41-5301-b0ec-282abd478b43", 00:21:46.918 "is_configured": true, 00:21:46.918 "data_offset": 2048, 00:21:46.918 "data_size": 63488 00:21:46.918 }, 00:21:46.918 { 00:21:46.918 "name": "BaseBdev4", 00:21:46.918 "uuid": "d74d3a61-640b-5063-bcb7-e45fa4972ad9", 00:21:46.918 "is_configured": true, 00:21:46.918 "data_offset": 2048, 00:21:46.918 "data_size": 63488 00:21:46.918 } 00:21:46.918 ] 00:21:46.918 }' 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79823 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
79823 ']' 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79823 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79823 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.918 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.918 killing process with pid 79823 00:21:46.919 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79823' 00:21:46.919 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79823 00:21:46.919 Received shutdown signal, test time was about 17.988622 seconds 00:21:46.919 00:21:46.919 Latency(us) 00:21:46.919 [2024-12-09T23:03:02.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.919 [2024-12-09T23:03:02.775Z] =================================================================================================================== 00:21:46.919 [2024-12-09T23:03:02.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.919 [2024-12-09 23:03:02.759669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.919 23:03:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79823 00:21:46.919 [2024-12-09 23:03:02.759869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.919 [2024-12-09 23:03:02.759962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.919 [2024-12-09 23:03:02.759981] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:47.516 [2024-12-09 23:03:03.257056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.918 23:03:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:48.918 00:21:48.918 real 0m21.911s 00:21:48.918 user 0m28.363s 00:21:48.918 sys 0m2.392s 00:21:48.918 23:03:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.918 23:03:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.918 ************************************ 00:21:48.918 END TEST raid_rebuild_test_sb_io 00:21:48.918 ************************************ 00:21:48.918 23:03:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:48.918 23:03:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:48.918 23:03:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:48.918 23:03:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.918 23:03:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.918 ************************************ 00:21:48.918 START TEST raid5f_state_function_test 00:21:48.918 ************************************ 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80556 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80556' 00:21:48.918 Process raid pid: 80556 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80556 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80556 ']' 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.918 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.176 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.176 23:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.176 23:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:49.176 [2024-12-09 23:03:04.870495] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:21:49.176 [2024-12-09 23:03:04.870684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.437 [2024-12-09 23:03:05.063115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.437 [2024-12-09 23:03:05.224092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.697 [2024-12-09 23:03:05.501055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.697 [2024-12-09 23:03:05.501152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 [2024-12-09 23:03:05.790136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:49.957 [2024-12-09 23:03:05.790222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:49.957 [2024-12-09 23:03:05.790236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.957 [2024-12-09 23:03:05.790249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.957 [2024-12-09 23:03:05.790258] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:49.957 [2024-12-09 23:03:05.790270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.957 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:50.216 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.216 "name": "Existed_Raid", 00:21:50.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.216 "strip_size_kb": 64, 00:21:50.216 "state": "configuring", 00:21:50.216 "raid_level": "raid5f", 00:21:50.216 "superblock": false, 00:21:50.216 "num_base_bdevs": 3, 00:21:50.216 "num_base_bdevs_discovered": 0, 00:21:50.216 "num_base_bdevs_operational": 3, 00:21:50.216 "base_bdevs_list": [ 00:21:50.216 { 00:21:50.216 "name": "BaseBdev1", 00:21:50.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.216 "is_configured": false, 00:21:50.216 "data_offset": 0, 00:21:50.216 "data_size": 0 00:21:50.216 }, 00:21:50.216 { 00:21:50.216 "name": "BaseBdev2", 00:21:50.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.216 "is_configured": false, 00:21:50.216 "data_offset": 0, 00:21:50.216 "data_size": 0 00:21:50.216 }, 00:21:50.216 { 00:21:50.216 "name": "BaseBdev3", 00:21:50.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.216 "is_configured": false, 00:21:50.216 "data_offset": 0, 00:21:50.216 "data_size": 0 00:21:50.216 } 00:21:50.216 ] 00:21:50.216 }' 00:21:50.216 23:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.216 23:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 [2024-12-09 23:03:06.217450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.474 [2024-12-09 23:03:06.217539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 [2024-12-09 23:03:06.225419] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:50.474 [2024-12-09 23:03:06.225519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:50.474 [2024-12-09 23:03:06.225532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:50.474 [2024-12-09 23:03:06.225544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.474 [2024-12-09 23:03:06.225552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:50.474 [2024-12-09 23:03:06.225563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 [2024-12-09 23:03:06.289434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.474 BaseBdev1 00:21:50.474 23:03:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.474 [ 00:21:50.474 { 00:21:50.474 "name": "BaseBdev1", 00:21:50.474 "aliases": [ 00:21:50.474 "67372fb8-d855-4322-ba4a-73d4ba2396e4" 00:21:50.474 ], 00:21:50.474 "product_name": "Malloc disk", 00:21:50.474 "block_size": 512, 00:21:50.474 "num_blocks": 65536, 00:21:50.474 "uuid": "67372fb8-d855-4322-ba4a-73d4ba2396e4", 00:21:50.474 "assigned_rate_limits": { 00:21:50.474 "rw_ios_per_sec": 0, 00:21:50.474 
"rw_mbytes_per_sec": 0, 00:21:50.474 "r_mbytes_per_sec": 0, 00:21:50.474 "w_mbytes_per_sec": 0 00:21:50.474 }, 00:21:50.474 "claimed": true, 00:21:50.474 "claim_type": "exclusive_write", 00:21:50.474 "zoned": false, 00:21:50.474 "supported_io_types": { 00:21:50.474 "read": true, 00:21:50.474 "write": true, 00:21:50.474 "unmap": true, 00:21:50.474 "flush": true, 00:21:50.474 "reset": true, 00:21:50.474 "nvme_admin": false, 00:21:50.474 "nvme_io": false, 00:21:50.474 "nvme_io_md": false, 00:21:50.474 "write_zeroes": true, 00:21:50.474 "zcopy": true, 00:21:50.474 "get_zone_info": false, 00:21:50.474 "zone_management": false, 00:21:50.474 "zone_append": false, 00:21:50.474 "compare": false, 00:21:50.474 "compare_and_write": false, 00:21:50.474 "abort": true, 00:21:50.474 "seek_hole": false, 00:21:50.474 "seek_data": false, 00:21:50.474 "copy": true, 00:21:50.474 "nvme_iov_md": false 00:21:50.474 }, 00:21:50.474 "memory_domains": [ 00:21:50.474 { 00:21:50.474 "dma_device_id": "system", 00:21:50.474 "dma_device_type": 1 00:21:50.474 }, 00:21:50.474 { 00:21:50.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.474 "dma_device_type": 2 00:21:50.474 } 00:21:50.474 ], 00:21:50.474 "driver_specific": {} 00:21:50.474 } 00:21:50.474 ] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:50.474 23:03:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.474 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.733 "name": "Existed_Raid", 00:21:50.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.733 "strip_size_kb": 64, 00:21:50.733 "state": "configuring", 00:21:50.733 "raid_level": "raid5f", 00:21:50.733 "superblock": false, 00:21:50.733 "num_base_bdevs": 3, 00:21:50.733 "num_base_bdevs_discovered": 1, 00:21:50.733 "num_base_bdevs_operational": 3, 00:21:50.733 "base_bdevs_list": [ 00:21:50.733 { 00:21:50.733 "name": "BaseBdev1", 00:21:50.733 "uuid": "67372fb8-d855-4322-ba4a-73d4ba2396e4", 00:21:50.733 "is_configured": true, 00:21:50.733 "data_offset": 0, 00:21:50.733 "data_size": 65536 00:21:50.733 }, 00:21:50.733 { 00:21:50.733 "name": 
"BaseBdev2", 00:21:50.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.733 "is_configured": false, 00:21:50.733 "data_offset": 0, 00:21:50.733 "data_size": 0 00:21:50.733 }, 00:21:50.733 { 00:21:50.733 "name": "BaseBdev3", 00:21:50.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.733 "is_configured": false, 00:21:50.733 "data_offset": 0, 00:21:50.733 "data_size": 0 00:21:50.733 } 00:21:50.733 ] 00:21:50.733 }' 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.733 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.991 [2024-12-09 23:03:06.752781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.991 [2024-12-09 23:03:06.752877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.991 [2024-12-09 23:03:06.760845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.991 [2024-12-09 23:03:06.763446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:21:50.991 [2024-12-09 23:03:06.763528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.991 [2024-12-09 23:03:06.763543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:50.991 [2024-12-09 23:03:06.763556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.991 "name": "Existed_Raid", 00:21:50.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.991 "strip_size_kb": 64, 00:21:50.991 "state": "configuring", 00:21:50.991 "raid_level": "raid5f", 00:21:50.991 "superblock": false, 00:21:50.991 "num_base_bdevs": 3, 00:21:50.991 "num_base_bdevs_discovered": 1, 00:21:50.991 "num_base_bdevs_operational": 3, 00:21:50.991 "base_bdevs_list": [ 00:21:50.991 { 00:21:50.991 "name": "BaseBdev1", 00:21:50.991 "uuid": "67372fb8-d855-4322-ba4a-73d4ba2396e4", 00:21:50.991 "is_configured": true, 00:21:50.991 "data_offset": 0, 00:21:50.991 "data_size": 65536 00:21:50.991 }, 00:21:50.991 { 00:21:50.991 "name": "BaseBdev2", 00:21:50.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.991 "is_configured": false, 00:21:50.991 "data_offset": 0, 00:21:50.991 "data_size": 0 00:21:50.991 }, 00:21:50.991 { 00:21:50.991 "name": "BaseBdev3", 00:21:50.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.991 "is_configured": false, 00:21:50.991 "data_offset": 0, 00:21:50.991 "data_size": 0 00:21:50.991 } 00:21:50.991 ] 00:21:50.991 }' 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.991 23:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 [2024-12-09 23:03:07.253577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.557 BaseBdev2 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.557 23:03:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.557 [ 00:21:51.557 { 00:21:51.557 "name": "BaseBdev2", 00:21:51.557 "aliases": [ 00:21:51.557 "d64c98e9-28b9-4bc7-bbef-442d229a4e92" 00:21:51.557 ], 00:21:51.557 "product_name": "Malloc disk", 00:21:51.557 "block_size": 512, 00:21:51.557 "num_blocks": 65536, 00:21:51.557 "uuid": "d64c98e9-28b9-4bc7-bbef-442d229a4e92", 00:21:51.557 "assigned_rate_limits": { 00:21:51.558 "rw_ios_per_sec": 0, 00:21:51.558 "rw_mbytes_per_sec": 0, 00:21:51.558 "r_mbytes_per_sec": 0, 00:21:51.558 "w_mbytes_per_sec": 0 00:21:51.558 }, 00:21:51.558 "claimed": true, 00:21:51.558 "claim_type": "exclusive_write", 00:21:51.558 "zoned": false, 00:21:51.558 "supported_io_types": { 00:21:51.558 "read": true, 00:21:51.558 "write": true, 00:21:51.558 "unmap": true, 00:21:51.558 "flush": true, 00:21:51.558 "reset": true, 00:21:51.558 "nvme_admin": false, 00:21:51.558 "nvme_io": false, 00:21:51.558 "nvme_io_md": false, 00:21:51.558 "write_zeroes": true, 00:21:51.558 "zcopy": true, 00:21:51.558 "get_zone_info": false, 00:21:51.558 "zone_management": false, 00:21:51.558 "zone_append": false, 00:21:51.558 "compare": false, 00:21:51.558 "compare_and_write": false, 00:21:51.558 "abort": true, 00:21:51.558 "seek_hole": false, 00:21:51.558 "seek_data": false, 00:21:51.558 "copy": true, 00:21:51.558 "nvme_iov_md": false 00:21:51.558 }, 00:21:51.558 "memory_domains": [ 00:21:51.558 { 00:21:51.558 "dma_device_id": "system", 00:21:51.558 "dma_device_type": 1 00:21:51.558 }, 00:21:51.558 { 00:21:51.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.558 "dma_device_type": 2 00:21:51.558 } 00:21:51.558 ], 00:21:51.558 "driver_specific": {} 00:21:51.558 } 00:21:51.558 ] 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:21:51.558 "name": "Existed_Raid", 00:21:51.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.558 "strip_size_kb": 64, 00:21:51.558 "state": "configuring", 00:21:51.558 "raid_level": "raid5f", 00:21:51.558 "superblock": false, 00:21:51.558 "num_base_bdevs": 3, 00:21:51.558 "num_base_bdevs_discovered": 2, 00:21:51.558 "num_base_bdevs_operational": 3, 00:21:51.558 "base_bdevs_list": [ 00:21:51.558 { 00:21:51.558 "name": "BaseBdev1", 00:21:51.558 "uuid": "67372fb8-d855-4322-ba4a-73d4ba2396e4", 00:21:51.558 "is_configured": true, 00:21:51.558 "data_offset": 0, 00:21:51.558 "data_size": 65536 00:21:51.558 }, 00:21:51.558 { 00:21:51.558 "name": "BaseBdev2", 00:21:51.558 "uuid": "d64c98e9-28b9-4bc7-bbef-442d229a4e92", 00:21:51.558 "is_configured": true, 00:21:51.558 "data_offset": 0, 00:21:51.558 "data_size": 65536 00:21:51.558 }, 00:21:51.558 { 00:21:51.558 "name": "BaseBdev3", 00:21:51.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.558 "is_configured": false, 00:21:51.558 "data_offset": 0, 00:21:51.558 "data_size": 0 00:21:51.558 } 00:21:51.558 ] 00:21:51.558 }' 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.558 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 [2024-12-09 23:03:07.746438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:52.125 [2024-12-09 23:03:07.746577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:52.125 [2024-12-09 23:03:07.746601] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:52.125 [2024-12-09 23:03:07.746970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:52.125 [2024-12-09 23:03:07.753838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:52.125 [2024-12-09 23:03:07.753870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:52.125 [2024-12-09 23:03:07.754296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.125 BaseBdev3 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:52.125 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.126 [ 00:21:52.126 { 00:21:52.126 "name": "BaseBdev3", 00:21:52.126 "aliases": [ 00:21:52.126 "30884ae5-ecb3-40a9-aa0a-f7957f3925be" 00:21:52.126 ], 00:21:52.126 "product_name": "Malloc disk", 00:21:52.126 "block_size": 512, 00:21:52.126 "num_blocks": 65536, 00:21:52.126 "uuid": "30884ae5-ecb3-40a9-aa0a-f7957f3925be", 00:21:52.126 "assigned_rate_limits": { 00:21:52.126 "rw_ios_per_sec": 0, 00:21:52.126 "rw_mbytes_per_sec": 0, 00:21:52.126 "r_mbytes_per_sec": 0, 00:21:52.126 "w_mbytes_per_sec": 0 00:21:52.126 }, 00:21:52.126 "claimed": true, 00:21:52.126 "claim_type": "exclusive_write", 00:21:52.126 "zoned": false, 00:21:52.126 "supported_io_types": { 00:21:52.126 "read": true, 00:21:52.126 "write": true, 00:21:52.126 "unmap": true, 00:21:52.126 "flush": true, 00:21:52.126 "reset": true, 00:21:52.126 "nvme_admin": false, 00:21:52.126 "nvme_io": false, 00:21:52.126 "nvme_io_md": false, 00:21:52.126 "write_zeroes": true, 00:21:52.126 "zcopy": true, 00:21:52.126 "get_zone_info": false, 00:21:52.126 "zone_management": false, 00:21:52.126 "zone_append": false, 00:21:52.126 "compare": false, 00:21:52.126 "compare_and_write": false, 00:21:52.126 "abort": true, 00:21:52.126 "seek_hole": false, 00:21:52.126 "seek_data": false, 00:21:52.126 "copy": true, 00:21:52.126 "nvme_iov_md": false 00:21:52.126 }, 00:21:52.126 "memory_domains": [ 00:21:52.126 { 00:21:52.126 "dma_device_id": "system", 00:21:52.126 "dma_device_type": 1 00:21:52.126 }, 00:21:52.126 { 00:21:52.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.126 "dma_device_type": 2 00:21:52.126 } 00:21:52.126 ], 00:21:52.126 "driver_specific": {} 00:21:52.126 } 00:21:52.126 ] 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.126 23:03:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.126 "name": "Existed_Raid", 00:21:52.126 "uuid": "100fb5b4-34b2-4a5a-9197-8a692fc3e743", 00:21:52.126 "strip_size_kb": 64, 00:21:52.126 "state": "online", 00:21:52.126 "raid_level": "raid5f", 00:21:52.126 "superblock": false, 00:21:52.126 "num_base_bdevs": 3, 00:21:52.126 "num_base_bdevs_discovered": 3, 00:21:52.126 "num_base_bdevs_operational": 3, 00:21:52.126 "base_bdevs_list": [ 00:21:52.126 { 00:21:52.126 "name": "BaseBdev1", 00:21:52.126 "uuid": "67372fb8-d855-4322-ba4a-73d4ba2396e4", 00:21:52.126 "is_configured": true, 00:21:52.126 "data_offset": 0, 00:21:52.126 "data_size": 65536 00:21:52.126 }, 00:21:52.126 { 00:21:52.126 "name": "BaseBdev2", 00:21:52.126 "uuid": "d64c98e9-28b9-4bc7-bbef-442d229a4e92", 00:21:52.126 "is_configured": true, 00:21:52.126 "data_offset": 0, 00:21:52.126 "data_size": 65536 00:21:52.126 }, 00:21:52.126 { 00:21:52.126 "name": "BaseBdev3", 00:21:52.126 "uuid": "30884ae5-ecb3-40a9-aa0a-f7957f3925be", 00:21:52.126 "is_configured": true, 00:21:52.126 "data_offset": 0, 00:21:52.126 "data_size": 65536 00:21:52.126 } 00:21:52.126 ] 00:21:52.126 }' 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.126 23:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:52.386 23:03:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:52.386 [2024-12-09 23:03:08.218696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.386 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:52.645 "name": "Existed_Raid", 00:21:52.645 "aliases": [ 00:21:52.645 "100fb5b4-34b2-4a5a-9197-8a692fc3e743" 00:21:52.645 ], 00:21:52.645 "product_name": "Raid Volume", 00:21:52.645 "block_size": 512, 00:21:52.645 "num_blocks": 131072, 00:21:52.645 "uuid": "100fb5b4-34b2-4a5a-9197-8a692fc3e743", 00:21:52.645 "assigned_rate_limits": { 00:21:52.645 "rw_ios_per_sec": 0, 00:21:52.645 "rw_mbytes_per_sec": 0, 00:21:52.645 "r_mbytes_per_sec": 0, 00:21:52.645 "w_mbytes_per_sec": 0 00:21:52.645 }, 00:21:52.645 "claimed": false, 00:21:52.645 "zoned": false, 00:21:52.645 "supported_io_types": { 00:21:52.645 "read": true, 00:21:52.645 "write": true, 00:21:52.645 "unmap": false, 00:21:52.645 "flush": false, 00:21:52.645 "reset": true, 00:21:52.645 "nvme_admin": false, 00:21:52.645 "nvme_io": false, 00:21:52.645 "nvme_io_md": false, 00:21:52.645 "write_zeroes": true, 00:21:52.645 "zcopy": false, 00:21:52.645 "get_zone_info": false, 00:21:52.645 "zone_management": false, 00:21:52.645 "zone_append": false, 
00:21:52.645 "compare": false, 00:21:52.645 "compare_and_write": false, 00:21:52.645 "abort": false, 00:21:52.645 "seek_hole": false, 00:21:52.645 "seek_data": false, 00:21:52.645 "copy": false, 00:21:52.645 "nvme_iov_md": false 00:21:52.645 }, 00:21:52.645 "driver_specific": { 00:21:52.645 "raid": { 00:21:52.645 "uuid": "100fb5b4-34b2-4a5a-9197-8a692fc3e743", 00:21:52.645 "strip_size_kb": 64, 00:21:52.645 "state": "online", 00:21:52.645 "raid_level": "raid5f", 00:21:52.645 "superblock": false, 00:21:52.645 "num_base_bdevs": 3, 00:21:52.645 "num_base_bdevs_discovered": 3, 00:21:52.645 "num_base_bdevs_operational": 3, 00:21:52.645 "base_bdevs_list": [ 00:21:52.645 { 00:21:52.645 "name": "BaseBdev1", 00:21:52.645 "uuid": "67372fb8-d855-4322-ba4a-73d4ba2396e4", 00:21:52.645 "is_configured": true, 00:21:52.645 "data_offset": 0, 00:21:52.645 "data_size": 65536 00:21:52.645 }, 00:21:52.645 { 00:21:52.645 "name": "BaseBdev2", 00:21:52.645 "uuid": "d64c98e9-28b9-4bc7-bbef-442d229a4e92", 00:21:52.645 "is_configured": true, 00:21:52.645 "data_offset": 0, 00:21:52.645 "data_size": 65536 00:21:52.645 }, 00:21:52.645 { 00:21:52.645 "name": "BaseBdev3", 00:21:52.645 "uuid": "30884ae5-ecb3-40a9-aa0a-f7957f3925be", 00:21:52.645 "is_configured": true, 00:21:52.645 "data_offset": 0, 00:21:52.645 "data_size": 65536 00:21:52.645 } 00:21:52.645 ] 00:21:52.645 } 00:21:52.645 } 00:21:52.645 }' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:52.645 BaseBdev2 00:21:52.645 BaseBdev3' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.645 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.645 [2024-12-09 23:03:08.465996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.911 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:52.912 
23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.912 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.912 "name": "Existed_Raid", 00:21:52.912 "uuid": "100fb5b4-34b2-4a5a-9197-8a692fc3e743", 00:21:52.912 "strip_size_kb": 64, 00:21:52.912 "state": 
"online", 00:21:52.912 "raid_level": "raid5f", 00:21:52.912 "superblock": false, 00:21:52.912 "num_base_bdevs": 3, 00:21:52.912 "num_base_bdevs_discovered": 2, 00:21:52.913 "num_base_bdevs_operational": 2, 00:21:52.913 "base_bdevs_list": [ 00:21:52.913 { 00:21:52.913 "name": null, 00:21:52.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.913 "is_configured": false, 00:21:52.913 "data_offset": 0, 00:21:52.913 "data_size": 65536 00:21:52.913 }, 00:21:52.913 { 00:21:52.913 "name": "BaseBdev2", 00:21:52.913 "uuid": "d64c98e9-28b9-4bc7-bbef-442d229a4e92", 00:21:52.913 "is_configured": true, 00:21:52.913 "data_offset": 0, 00:21:52.913 "data_size": 65536 00:21:52.913 }, 00:21:52.913 { 00:21:52.913 "name": "BaseBdev3", 00:21:52.913 "uuid": "30884ae5-ecb3-40a9-aa0a-f7957f3925be", 00:21:52.913 "is_configured": true, 00:21:52.913 "data_offset": 0, 00:21:52.913 "data_size": 65536 00:21:52.913 } 00:21:52.913 ] 00:21:52.913 }' 00:21:52.913 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.913 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:53.234 23:03:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.234 23:03:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:53.234 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.234 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:53.234 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.234 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.234 [2024-12-09 23:03:09.024734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:53.234 [2024-12-09 23:03:09.024871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.498 [2024-12-09 23:03:09.142325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.498 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.498 [2024-12-09 23:03:09.186318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:53.499 [2024-12-09 23:03:09.186393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.499 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.757 BaseBdev2 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:53.757 [ 00:21:53.757 { 00:21:53.757 "name": "BaseBdev2", 00:21:53.757 "aliases": [ 00:21:53.757 "97d9e913-aefe-4ada-88ec-e569e752b37e" 00:21:53.757 ], 00:21:53.757 "product_name": "Malloc disk", 00:21:53.757 "block_size": 512, 00:21:53.757 "num_blocks": 65536, 00:21:53.757 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:53.757 "assigned_rate_limits": { 00:21:53.757 "rw_ios_per_sec": 0, 00:21:53.757 "rw_mbytes_per_sec": 0, 00:21:53.757 "r_mbytes_per_sec": 0, 00:21:53.757 "w_mbytes_per_sec": 0 00:21:53.757 }, 00:21:53.757 "claimed": false, 00:21:53.757 "zoned": false, 00:21:53.757 "supported_io_types": { 00:21:53.757 "read": true, 00:21:53.757 "write": true, 00:21:53.757 "unmap": true, 00:21:53.757 "flush": true, 00:21:53.757 "reset": true, 00:21:53.757 "nvme_admin": false, 00:21:53.757 "nvme_io": false, 00:21:53.757 "nvme_io_md": false, 00:21:53.757 "write_zeroes": true, 00:21:53.757 "zcopy": true, 00:21:53.757 "get_zone_info": false, 00:21:53.757 "zone_management": false, 00:21:53.757 "zone_append": false, 00:21:53.757 "compare": false, 00:21:53.757 "compare_and_write": false, 00:21:53.757 "abort": true, 00:21:53.757 "seek_hole": false, 00:21:53.757 "seek_data": false, 00:21:53.757 "copy": true, 00:21:53.757 "nvme_iov_md": false 00:21:53.757 }, 00:21:53.757 "memory_domains": [ 00:21:53.757 { 00:21:53.757 "dma_device_id": "system", 00:21:53.757 "dma_device_type": 1 00:21:53.757 }, 00:21:53.757 { 00:21:53.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.757 "dma_device_type": 2 00:21:53.757 } 00:21:53.757 ], 00:21:53.757 "driver_specific": {} 00:21:53.757 } 00:21:53.757 ] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.757 BaseBdev3 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.757 23:03:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:53.757 [ 00:21:53.757 { 00:21:53.757 "name": "BaseBdev3", 00:21:53.757 "aliases": [ 00:21:53.757 "f154c137-708a-4d10-8c6b-f01677555b37" 00:21:53.757 ], 00:21:53.757 "product_name": "Malloc disk", 00:21:53.757 "block_size": 512, 00:21:53.757 "num_blocks": 65536, 00:21:53.757 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:53.757 "assigned_rate_limits": { 00:21:53.757 "rw_ios_per_sec": 0, 00:21:53.757 "rw_mbytes_per_sec": 0, 00:21:53.757 "r_mbytes_per_sec": 0, 00:21:53.757 "w_mbytes_per_sec": 0 00:21:53.757 }, 00:21:53.757 "claimed": false, 00:21:53.757 "zoned": false, 00:21:53.757 "supported_io_types": { 00:21:53.757 "read": true, 00:21:53.757 "write": true, 00:21:53.757 "unmap": true, 00:21:53.757 "flush": true, 00:21:53.757 "reset": true, 00:21:53.757 "nvme_admin": false, 00:21:53.757 "nvme_io": false, 00:21:53.757 "nvme_io_md": false, 00:21:53.757 "write_zeroes": true, 00:21:53.757 "zcopy": true, 00:21:53.757 "get_zone_info": false, 00:21:53.757 "zone_management": false, 00:21:53.758 "zone_append": false, 00:21:53.758 "compare": false, 00:21:53.758 "compare_and_write": false, 00:21:53.758 "abort": true, 00:21:53.758 "seek_hole": false, 00:21:53.758 "seek_data": false, 00:21:53.758 "copy": true, 00:21:53.758 "nvme_iov_md": false 00:21:53.758 }, 00:21:53.758 "memory_domains": [ 00:21:53.758 { 00:21:53.758 "dma_device_id": "system", 00:21:53.758 "dma_device_type": 1 00:21:53.758 }, 00:21:53.758 { 00:21:53.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.758 "dma_device_type": 2 00:21:53.758 } 00:21:53.758 ], 00:21:53.758 "driver_specific": {} 00:21:53.758 } 00:21:53.758 ] 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:53.758 23:03:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.758 [2024-12-09 23:03:09.506676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.758 [2024-12-09 23:03:09.506737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.758 [2024-12-09 23:03:09.506770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:53.758 [2024-12-09 23:03:09.509008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.758 23:03:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.758 "name": "Existed_Raid", 00:21:53.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.758 "strip_size_kb": 64, 00:21:53.758 "state": "configuring", 00:21:53.758 "raid_level": "raid5f", 00:21:53.758 "superblock": false, 00:21:53.758 "num_base_bdevs": 3, 00:21:53.758 "num_base_bdevs_discovered": 2, 00:21:53.758 "num_base_bdevs_operational": 3, 00:21:53.758 "base_bdevs_list": [ 00:21:53.758 { 00:21:53.758 "name": "BaseBdev1", 00:21:53.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.758 "is_configured": false, 00:21:53.758 "data_offset": 0, 00:21:53.758 "data_size": 0 00:21:53.758 }, 00:21:53.758 { 00:21:53.758 "name": "BaseBdev2", 00:21:53.758 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:53.758 "is_configured": true, 00:21:53.758 "data_offset": 0, 00:21:53.758 "data_size": 65536 00:21:53.758 }, 00:21:53.758 { 00:21:53.758 "name": "BaseBdev3", 00:21:53.758 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:53.758 "is_configured": true, 
00:21:53.758 "data_offset": 0, 00:21:53.758 "data_size": 65536 00:21:53.758 } 00:21:53.758 ] 00:21:53.758 }' 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.758 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.324 [2024-12-09 23:03:09.958120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.324 23:03:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.324 23:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.324 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.324 "name": "Existed_Raid", 00:21:54.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.324 "strip_size_kb": 64, 00:21:54.324 "state": "configuring", 00:21:54.324 "raid_level": "raid5f", 00:21:54.324 "superblock": false, 00:21:54.324 "num_base_bdevs": 3, 00:21:54.324 "num_base_bdevs_discovered": 1, 00:21:54.324 "num_base_bdevs_operational": 3, 00:21:54.324 "base_bdevs_list": [ 00:21:54.324 { 00:21:54.324 "name": "BaseBdev1", 00:21:54.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.324 "is_configured": false, 00:21:54.324 "data_offset": 0, 00:21:54.324 "data_size": 0 00:21:54.324 }, 00:21:54.324 { 00:21:54.324 "name": null, 00:21:54.324 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:54.324 "is_configured": false, 00:21:54.324 "data_offset": 0, 00:21:54.324 "data_size": 65536 00:21:54.324 }, 00:21:54.324 { 00:21:54.324 "name": "BaseBdev3", 00:21:54.324 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:54.324 "is_configured": true, 00:21:54.324 "data_offset": 0, 00:21:54.324 "data_size": 65536 00:21:54.324 } 00:21:54.324 ] 00:21:54.324 }' 00:21:54.324 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.324 23:03:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.582 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.840 [2024-12-09 23:03:10.461870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.840 BaseBdev1 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:54.840 23:03:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.840 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.840 [ 00:21:54.840 { 00:21:54.840 "name": "BaseBdev1", 00:21:54.840 "aliases": [ 00:21:54.840 "fd0854ed-7172-4b9a-81fb-38ab94b29696" 00:21:54.840 ], 00:21:54.840 "product_name": "Malloc disk", 00:21:54.840 "block_size": 512, 00:21:54.840 "num_blocks": 65536, 00:21:54.840 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:54.840 "assigned_rate_limits": { 00:21:54.840 "rw_ios_per_sec": 0, 00:21:54.841 "rw_mbytes_per_sec": 0, 00:21:54.841 "r_mbytes_per_sec": 0, 00:21:54.841 "w_mbytes_per_sec": 0 00:21:54.841 }, 00:21:54.841 "claimed": true, 00:21:54.841 "claim_type": "exclusive_write", 00:21:54.841 "zoned": false, 00:21:54.841 "supported_io_types": { 00:21:54.841 "read": true, 00:21:54.841 "write": true, 00:21:54.841 "unmap": true, 00:21:54.841 "flush": true, 00:21:54.841 "reset": true, 00:21:54.841 "nvme_admin": false, 00:21:54.841 "nvme_io": false, 00:21:54.841 "nvme_io_md": false, 00:21:54.841 "write_zeroes": true, 00:21:54.841 "zcopy": true, 00:21:54.841 "get_zone_info": false, 00:21:54.841 "zone_management": false, 00:21:54.841 "zone_append": false, 00:21:54.841 
"compare": false, 00:21:54.841 "compare_and_write": false, 00:21:54.841 "abort": true, 00:21:54.841 "seek_hole": false, 00:21:54.841 "seek_data": false, 00:21:54.841 "copy": true, 00:21:54.841 "nvme_iov_md": false 00:21:54.841 }, 00:21:54.841 "memory_domains": [ 00:21:54.841 { 00:21:54.841 "dma_device_id": "system", 00:21:54.841 "dma_device_type": 1 00:21:54.841 }, 00:21:54.841 { 00:21:54.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.841 "dma_device_type": 2 00:21:54.841 } 00:21:54.841 ], 00:21:54.841 "driver_specific": {} 00:21:54.841 } 00:21:54.841 ] 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.841 23:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.841 "name": "Existed_Raid", 00:21:54.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.841 "strip_size_kb": 64, 00:21:54.841 "state": "configuring", 00:21:54.841 "raid_level": "raid5f", 00:21:54.841 "superblock": false, 00:21:54.841 "num_base_bdevs": 3, 00:21:54.841 "num_base_bdevs_discovered": 2, 00:21:54.841 "num_base_bdevs_operational": 3, 00:21:54.841 "base_bdevs_list": [ 00:21:54.841 { 00:21:54.841 "name": "BaseBdev1", 00:21:54.841 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:54.841 "is_configured": true, 00:21:54.841 "data_offset": 0, 00:21:54.841 "data_size": 65536 00:21:54.841 }, 00:21:54.841 { 00:21:54.841 "name": null, 00:21:54.841 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:54.841 "is_configured": false, 00:21:54.841 "data_offset": 0, 00:21:54.841 "data_size": 65536 00:21:54.841 }, 00:21:54.841 { 00:21:54.841 "name": "BaseBdev3", 00:21:54.841 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:54.841 "is_configured": true, 00:21:54.841 "data_offset": 0, 00:21:54.841 "data_size": 65536 00:21:54.841 } 00:21:54.841 ] 00:21:54.841 }' 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.841 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.100 23:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.100 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.100 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.100 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:55.100 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.363 [2024-12-09 23:03:10.981197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:55.363 23:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.363 23:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.363 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.363 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.363 "name": "Existed_Raid", 00:21:55.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.363 "strip_size_kb": 64, 00:21:55.363 "state": "configuring", 00:21:55.363 "raid_level": "raid5f", 00:21:55.363 "superblock": false, 00:21:55.363 "num_base_bdevs": 3, 00:21:55.363 "num_base_bdevs_discovered": 1, 00:21:55.363 "num_base_bdevs_operational": 3, 00:21:55.363 "base_bdevs_list": [ 00:21:55.363 { 00:21:55.363 "name": "BaseBdev1", 00:21:55.363 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:55.363 "is_configured": true, 00:21:55.363 "data_offset": 0, 00:21:55.363 "data_size": 65536 00:21:55.363 }, 00:21:55.363 { 00:21:55.363 "name": null, 00:21:55.363 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:55.363 "is_configured": false, 00:21:55.363 "data_offset": 0, 00:21:55.363 "data_size": 65536 00:21:55.363 }, 00:21:55.363 { 00:21:55.363 "name": null, 
00:21:55.363 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:55.363 "is_configured": false, 00:21:55.363 "data_offset": 0, 00:21:55.363 "data_size": 65536 00:21:55.363 } 00:21:55.363 ] 00:21:55.363 }' 00:21:55.363 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.363 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.628 [2024-12-09 23:03:11.420604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.628 23:03:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.628 "name": "Existed_Raid", 00:21:55.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.628 "strip_size_kb": 64, 00:21:55.628 "state": "configuring", 00:21:55.628 "raid_level": "raid5f", 00:21:55.628 "superblock": false, 00:21:55.628 "num_base_bdevs": 3, 00:21:55.628 "num_base_bdevs_discovered": 2, 00:21:55.628 "num_base_bdevs_operational": 3, 00:21:55.628 "base_bdevs_list": [ 00:21:55.628 { 
00:21:55.628 "name": "BaseBdev1", 00:21:55.628 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:55.628 "is_configured": true, 00:21:55.628 "data_offset": 0, 00:21:55.628 "data_size": 65536 00:21:55.628 }, 00:21:55.628 { 00:21:55.628 "name": null, 00:21:55.628 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:55.628 "is_configured": false, 00:21:55.628 "data_offset": 0, 00:21:55.628 "data_size": 65536 00:21:55.628 }, 00:21:55.628 { 00:21:55.628 "name": "BaseBdev3", 00:21:55.628 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:55.628 "is_configured": true, 00:21:55.628 "data_offset": 0, 00:21:55.628 "data_size": 65536 00:21:55.628 } 00:21:55.628 ] 00:21:55.628 }' 00:21:55.628 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.888 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.147 23:03:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.147 [2024-12-09 23:03:11.907747] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.406 "name": "Existed_Raid", 00:21:56.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.406 "strip_size_kb": 64, 00:21:56.406 "state": "configuring", 00:21:56.406 "raid_level": "raid5f", 00:21:56.406 "superblock": false, 00:21:56.406 "num_base_bdevs": 3, 00:21:56.406 "num_base_bdevs_discovered": 1, 00:21:56.406 "num_base_bdevs_operational": 3, 00:21:56.406 "base_bdevs_list": [ 00:21:56.406 { 00:21:56.406 "name": null, 00:21:56.406 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:56.406 "is_configured": false, 00:21:56.406 "data_offset": 0, 00:21:56.406 "data_size": 65536 00:21:56.406 }, 00:21:56.406 { 00:21:56.406 "name": null, 00:21:56.406 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:56.406 "is_configured": false, 00:21:56.406 "data_offset": 0, 00:21:56.406 "data_size": 65536 00:21:56.406 }, 00:21:56.406 { 00:21:56.406 "name": "BaseBdev3", 00:21:56.406 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:56.406 "is_configured": true, 00:21:56.406 "data_offset": 0, 00:21:56.406 "data_size": 65536 00:21:56.406 } 00:21:56.406 ] 00:21:56.406 }' 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.406 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.665 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.666 [2024-12-09 23:03:12.453593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.666 23:03:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.666 "name": "Existed_Raid", 00:21:56.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.666 "strip_size_kb": 64, 00:21:56.666 "state": "configuring", 00:21:56.666 "raid_level": "raid5f", 00:21:56.666 "superblock": false, 00:21:56.666 "num_base_bdevs": 3, 00:21:56.666 "num_base_bdevs_discovered": 2, 00:21:56.666 "num_base_bdevs_operational": 3, 00:21:56.666 "base_bdevs_list": [ 00:21:56.666 { 00:21:56.666 "name": null, 00:21:56.666 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:56.666 "is_configured": false, 00:21:56.666 "data_offset": 0, 00:21:56.666 "data_size": 65536 00:21:56.666 }, 00:21:56.666 { 00:21:56.666 "name": "BaseBdev2", 00:21:56.666 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:56.666 "is_configured": true, 00:21:56.666 "data_offset": 0, 00:21:56.666 "data_size": 65536 00:21:56.666 }, 00:21:56.666 { 00:21:56.666 "name": "BaseBdev3", 00:21:56.666 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:56.666 "is_configured": true, 00:21:56.666 "data_offset": 0, 00:21:56.666 "data_size": 65536 00:21:56.666 } 00:21:56.666 ] 00:21:56.666 }' 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.666 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.236 23:03:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd0854ed-7172-4b9a-81fb-38ab94b29696 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 [2024-12-09 23:03:12.991395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:57.236 [2024-12-09 23:03:12.991501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:57.236 [2024-12-09 23:03:12.991515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:57.236 [2024-12-09 23:03:12.991818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:21:57.236 [2024-12-09 23:03:12.998212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:57.236 [2024-12-09 23:03:12.998260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:57.236 [2024-12-09 23:03:12.998653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.236 NewBaseBdev 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:57.236 23:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.236 23:03:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 [ 00:21:57.236 { 00:21:57.236 "name": "NewBaseBdev", 00:21:57.236 "aliases": [ 00:21:57.236 "fd0854ed-7172-4b9a-81fb-38ab94b29696" 00:21:57.236 ], 00:21:57.236 "product_name": "Malloc disk", 00:21:57.236 "block_size": 512, 00:21:57.236 "num_blocks": 65536, 00:21:57.236 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:57.236 "assigned_rate_limits": { 00:21:57.236 "rw_ios_per_sec": 0, 00:21:57.236 "rw_mbytes_per_sec": 0, 00:21:57.236 "r_mbytes_per_sec": 0, 00:21:57.236 "w_mbytes_per_sec": 0 00:21:57.236 }, 00:21:57.236 "claimed": true, 00:21:57.236 "claim_type": "exclusive_write", 00:21:57.236 "zoned": false, 00:21:57.236 "supported_io_types": { 00:21:57.236 "read": true, 00:21:57.236 "write": true, 00:21:57.236 "unmap": true, 00:21:57.236 "flush": true, 00:21:57.236 "reset": true, 00:21:57.236 "nvme_admin": false, 00:21:57.236 "nvme_io": false, 00:21:57.236 "nvme_io_md": false, 00:21:57.236 "write_zeroes": true, 00:21:57.236 "zcopy": true, 00:21:57.236 "get_zone_info": false, 00:21:57.236 "zone_management": false, 00:21:57.236 "zone_append": false, 00:21:57.236 "compare": false, 00:21:57.236 "compare_and_write": false, 00:21:57.236 "abort": true, 00:21:57.236 "seek_hole": false, 00:21:57.236 "seek_data": false, 00:21:57.236 "copy": true, 00:21:57.236 "nvme_iov_md": false 00:21:57.236 }, 00:21:57.236 "memory_domains": [ 00:21:57.236 { 00:21:57.236 "dma_device_id": "system", 00:21:57.236 "dma_device_type": 1 00:21:57.236 }, 00:21:57.236 { 00:21:57.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.236 "dma_device_type": 2 00:21:57.236 } 00:21:57.236 ], 00:21:57.236 "driver_specific": {} 00:21:57.236 } 00:21:57.236 ] 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:57.236 23:03:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.236 "name": "Existed_Raid", 00:21:57.236 "uuid": "74257da5-8695-498c-b302-c9e89be38ab0", 00:21:57.236 "strip_size_kb": 64, 00:21:57.236 "state": "online", 
00:21:57.236 "raid_level": "raid5f", 00:21:57.236 "superblock": false, 00:21:57.236 "num_base_bdevs": 3, 00:21:57.236 "num_base_bdevs_discovered": 3, 00:21:57.236 "num_base_bdevs_operational": 3, 00:21:57.236 "base_bdevs_list": [ 00:21:57.236 { 00:21:57.236 "name": "NewBaseBdev", 00:21:57.236 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:57.236 "is_configured": true, 00:21:57.236 "data_offset": 0, 00:21:57.236 "data_size": 65536 00:21:57.236 }, 00:21:57.236 { 00:21:57.236 "name": "BaseBdev2", 00:21:57.236 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:57.236 "is_configured": true, 00:21:57.236 "data_offset": 0, 00:21:57.236 "data_size": 65536 00:21:57.236 }, 00:21:57.236 { 00:21:57.236 "name": "BaseBdev3", 00:21:57.236 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:57.236 "is_configured": true, 00:21:57.236 "data_offset": 0, 00:21:57.236 "data_size": 65536 00:21:57.236 } 00:21:57.236 ] 00:21:57.236 }' 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.236 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.811 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:57.811 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:57.811 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.812 [2024-12-09 23:03:13.522069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:57.812 "name": "Existed_Raid", 00:21:57.812 "aliases": [ 00:21:57.812 "74257da5-8695-498c-b302-c9e89be38ab0" 00:21:57.812 ], 00:21:57.812 "product_name": "Raid Volume", 00:21:57.812 "block_size": 512, 00:21:57.812 "num_blocks": 131072, 00:21:57.812 "uuid": "74257da5-8695-498c-b302-c9e89be38ab0", 00:21:57.812 "assigned_rate_limits": { 00:21:57.812 "rw_ios_per_sec": 0, 00:21:57.812 "rw_mbytes_per_sec": 0, 00:21:57.812 "r_mbytes_per_sec": 0, 00:21:57.812 "w_mbytes_per_sec": 0 00:21:57.812 }, 00:21:57.812 "claimed": false, 00:21:57.812 "zoned": false, 00:21:57.812 "supported_io_types": { 00:21:57.812 "read": true, 00:21:57.812 "write": true, 00:21:57.812 "unmap": false, 00:21:57.812 "flush": false, 00:21:57.812 "reset": true, 00:21:57.812 "nvme_admin": false, 00:21:57.812 "nvme_io": false, 00:21:57.812 "nvme_io_md": false, 00:21:57.812 "write_zeroes": true, 00:21:57.812 "zcopy": false, 00:21:57.812 "get_zone_info": false, 00:21:57.812 "zone_management": false, 00:21:57.812 "zone_append": false, 00:21:57.812 "compare": false, 00:21:57.812 "compare_and_write": false, 00:21:57.812 "abort": false, 00:21:57.812 "seek_hole": false, 00:21:57.812 "seek_data": false, 00:21:57.812 "copy": false, 00:21:57.812 "nvme_iov_md": false 00:21:57.812 }, 00:21:57.812 "driver_specific": { 00:21:57.812 "raid": { 00:21:57.812 "uuid": "74257da5-8695-498c-b302-c9e89be38ab0", 
00:21:57.812 "strip_size_kb": 64, 00:21:57.812 "state": "online", 00:21:57.812 "raid_level": "raid5f", 00:21:57.812 "superblock": false, 00:21:57.812 "num_base_bdevs": 3, 00:21:57.812 "num_base_bdevs_discovered": 3, 00:21:57.812 "num_base_bdevs_operational": 3, 00:21:57.812 "base_bdevs_list": [ 00:21:57.812 { 00:21:57.812 "name": "NewBaseBdev", 00:21:57.812 "uuid": "fd0854ed-7172-4b9a-81fb-38ab94b29696", 00:21:57.812 "is_configured": true, 00:21:57.812 "data_offset": 0, 00:21:57.812 "data_size": 65536 00:21:57.812 }, 00:21:57.812 { 00:21:57.812 "name": "BaseBdev2", 00:21:57.812 "uuid": "97d9e913-aefe-4ada-88ec-e569e752b37e", 00:21:57.812 "is_configured": true, 00:21:57.812 "data_offset": 0, 00:21:57.812 "data_size": 65536 00:21:57.812 }, 00:21:57.812 { 00:21:57.812 "name": "BaseBdev3", 00:21:57.812 "uuid": "f154c137-708a-4d10-8c6b-f01677555b37", 00:21:57.812 "is_configured": true, 00:21:57.812 "data_offset": 0, 00:21:57.812 "data_size": 65536 00:21:57.812 } 00:21:57.812 ] 00:21:57.812 } 00:21:57.812 } 00:21:57.812 }' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:57.812 BaseBdev2 00:21:57.812 BaseBdev3' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:57.812 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:58.073 23:03:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.073 [2024-12-09 23:03:13.769658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:58.073 [2024-12-09 23:03:13.769710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:58.073 [2024-12-09 23:03:13.769814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.073 [2024-12-09 23:03:13.770165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:58.073 [2024-12-09 23:03:13.770189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80556 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80556 ']' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80556 
00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80556 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80556' 00:21:58.073 killing process with pid 80556 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80556 00:21:58.073 [2024-12-09 23:03:13.809813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.073 23:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80556 00:21:58.333 [2024-12-09 23:03:14.177920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.722 23:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:59.722 ************************************ 00:21:59.723 END TEST raid5f_state_function_test 00:21:59.723 ************************************ 00:21:59.723 00:21:59.723 real 0m10.784s 00:21:59.723 user 0m16.634s 00:21:59.723 sys 0m1.912s 00:21:59.723 23:03:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.723 23:03:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.982 23:03:15 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:59.982 23:03:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:59.982 
23:03:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.982 23:03:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.982 ************************************ 00:21:59.982 START TEST raid5f_state_function_test_sb 00:21:59.982 ************************************ 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:59.982 
23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81174 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81174' 00:21:59.982 Process raid pid: 81174 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81174 00:21:59.982 23:03:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81174 ']' 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.982 23:03:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.982 [2024-12-09 23:03:15.711079] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:21:59.982 [2024-12-09 23:03:15.711226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.242 [2024-12-09 23:03:15.887148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.242 [2024-12-09 23:03:16.027759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.502 [2024-12-09 23:03:16.282835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.502 [2024-12-09 23:03:16.282897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:01.070 23:03:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.070 [2024-12-09 23:03:16.682840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:01.070 [2024-12-09 23:03:16.682924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:01.070 [2024-12-09 23:03:16.682949] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:01.070 [2024-12-09 23:03:16.682968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:01.070 [2024-12-09 23:03:16.682980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:01.070 [2024-12-09 23:03:16.682998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.070 "name": "Existed_Raid", 00:22:01.070 "uuid": "a25be0d3-89bf-49d5-b667-47d8199b2a3e", 00:22:01.070 "strip_size_kb": 64, 00:22:01.070 "state": "configuring", 00:22:01.070 "raid_level": "raid5f", 00:22:01.070 "superblock": true, 00:22:01.070 "num_base_bdevs": 3, 00:22:01.070 "num_base_bdevs_discovered": 0, 00:22:01.070 "num_base_bdevs_operational": 3, 00:22:01.070 "base_bdevs_list": [ 00:22:01.070 { 00:22:01.070 "name": "BaseBdev1", 00:22:01.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.070 "is_configured": false, 00:22:01.070 "data_offset": 0, 00:22:01.070 "data_size": 0 00:22:01.070 }, 00:22:01.070 { 00:22:01.070 "name": "BaseBdev2", 00:22:01.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.070 "is_configured": false, 00:22:01.070 
"data_offset": 0, 00:22:01.070 "data_size": 0 00:22:01.070 }, 00:22:01.070 { 00:22:01.070 "name": "BaseBdev3", 00:22:01.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.070 "is_configured": false, 00:22:01.070 "data_offset": 0, 00:22:01.070 "data_size": 0 00:22:01.070 } 00:22:01.070 ] 00:22:01.070 }' 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.070 23:03:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.328 [2024-12-09 23:03:17.118073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:01.328 [2024-12-09 23:03:17.118141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.328 [2024-12-09 23:03:17.126088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:01.328 [2024-12-09 23:03:17.126162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:01.328 [2024-12-09 23:03:17.126174] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:01.328 [2024-12-09 23:03:17.126186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:01.328 [2024-12-09 23:03:17.126194] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:01.328 [2024-12-09 23:03:17.126205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.328 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.328 [2024-12-09 23:03:17.173303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.329 BaseBdev1 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.329 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.587 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.587 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:01.587 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.587 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.587 [ 00:22:01.587 { 00:22:01.587 "name": "BaseBdev1", 00:22:01.587 "aliases": [ 00:22:01.587 "6efb4fef-66a0-472e-bb3e-7755e45405db" 00:22:01.588 ], 00:22:01.588 "product_name": "Malloc disk", 00:22:01.588 "block_size": 512, 00:22:01.588 "num_blocks": 65536, 00:22:01.588 "uuid": "6efb4fef-66a0-472e-bb3e-7755e45405db", 00:22:01.588 "assigned_rate_limits": { 00:22:01.588 "rw_ios_per_sec": 0, 00:22:01.588 "rw_mbytes_per_sec": 0, 00:22:01.588 "r_mbytes_per_sec": 0, 00:22:01.588 "w_mbytes_per_sec": 0 00:22:01.588 }, 00:22:01.588 "claimed": true, 00:22:01.588 "claim_type": "exclusive_write", 00:22:01.588 "zoned": false, 00:22:01.588 "supported_io_types": { 00:22:01.588 "read": true, 00:22:01.588 "write": true, 00:22:01.588 "unmap": true, 00:22:01.588 "flush": true, 00:22:01.588 "reset": true, 00:22:01.588 "nvme_admin": false, 00:22:01.588 "nvme_io": false, 00:22:01.588 "nvme_io_md": false, 00:22:01.588 "write_zeroes": true, 00:22:01.588 "zcopy": true, 00:22:01.588 "get_zone_info": false, 00:22:01.588 "zone_management": false, 00:22:01.588 "zone_append": false, 00:22:01.588 "compare": false, 00:22:01.588 "compare_and_write": false, 00:22:01.588 "abort": true, 00:22:01.588 "seek_hole": false, 00:22:01.588 
"seek_data": false, 00:22:01.588 "copy": true, 00:22:01.588 "nvme_iov_md": false 00:22:01.588 }, 00:22:01.588 "memory_domains": [ 00:22:01.588 { 00:22:01.588 "dma_device_id": "system", 00:22:01.588 "dma_device_type": 1 00:22:01.588 }, 00:22:01.588 { 00:22:01.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.588 "dma_device_type": 2 00:22:01.588 } 00:22:01.588 ], 00:22:01.588 "driver_specific": {} 00:22:01.588 } 00:22:01.588 ] 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.588 "name": "Existed_Raid", 00:22:01.588 "uuid": "21c8d838-4c86-491e-9cae-bcb6ffd6e0bf", 00:22:01.588 "strip_size_kb": 64, 00:22:01.588 "state": "configuring", 00:22:01.588 "raid_level": "raid5f", 00:22:01.588 "superblock": true, 00:22:01.588 "num_base_bdevs": 3, 00:22:01.588 "num_base_bdevs_discovered": 1, 00:22:01.588 "num_base_bdevs_operational": 3, 00:22:01.588 "base_bdevs_list": [ 00:22:01.588 { 00:22:01.588 "name": "BaseBdev1", 00:22:01.588 "uuid": "6efb4fef-66a0-472e-bb3e-7755e45405db", 00:22:01.588 "is_configured": true, 00:22:01.588 "data_offset": 2048, 00:22:01.588 "data_size": 63488 00:22:01.588 }, 00:22:01.588 { 00:22:01.588 "name": "BaseBdev2", 00:22:01.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.588 "is_configured": false, 00:22:01.588 "data_offset": 0, 00:22:01.588 "data_size": 0 00:22:01.588 }, 00:22:01.588 { 00:22:01.588 "name": "BaseBdev3", 00:22:01.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.588 "is_configured": false, 00:22:01.588 "data_offset": 0, 00:22:01.588 "data_size": 0 00:22:01.588 } 00:22:01.588 ] 00:22:01.588 }' 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.588 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.847 [2024-12-09 23:03:17.644667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:01.847 [2024-12-09 23:03:17.644748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.847 [2024-12-09 23:03:17.656763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.847 [2024-12-09 23:03:17.659004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:01.847 [2024-12-09 23:03:17.659068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:01.847 [2024-12-09 23:03:17.659081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:01.847 [2024-12-09 23:03:17.659092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.847 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.107 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.107 "name": 
"Existed_Raid", 00:22:02.107 "uuid": "441c04db-81c1-4a02-b78c-0ceb9f81debb", 00:22:02.107 "strip_size_kb": 64, 00:22:02.107 "state": "configuring", 00:22:02.107 "raid_level": "raid5f", 00:22:02.107 "superblock": true, 00:22:02.107 "num_base_bdevs": 3, 00:22:02.107 "num_base_bdevs_discovered": 1, 00:22:02.107 "num_base_bdevs_operational": 3, 00:22:02.107 "base_bdevs_list": [ 00:22:02.107 { 00:22:02.107 "name": "BaseBdev1", 00:22:02.107 "uuid": "6efb4fef-66a0-472e-bb3e-7755e45405db", 00:22:02.107 "is_configured": true, 00:22:02.107 "data_offset": 2048, 00:22:02.107 "data_size": 63488 00:22:02.107 }, 00:22:02.107 { 00:22:02.107 "name": "BaseBdev2", 00:22:02.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.107 "is_configured": false, 00:22:02.107 "data_offset": 0, 00:22:02.107 "data_size": 0 00:22:02.107 }, 00:22:02.107 { 00:22:02.107 "name": "BaseBdev3", 00:22:02.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.107 "is_configured": false, 00:22:02.107 "data_offset": 0, 00:22:02.107 "data_size": 0 00:22:02.107 } 00:22:02.107 ] 00:22:02.107 }' 00:22:02.107 23:03:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.107 23:03:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.365 [2024-12-09 23:03:18.176878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.365 BaseBdev2 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.365 [ 00:22:02.365 { 00:22:02.365 "name": "BaseBdev2", 00:22:02.365 "aliases": [ 00:22:02.365 "76abad63-bc8c-4abe-820a-4f18814b86e8" 00:22:02.365 ], 00:22:02.365 "product_name": "Malloc disk", 00:22:02.365 "block_size": 512, 00:22:02.365 "num_blocks": 65536, 00:22:02.365 "uuid": "76abad63-bc8c-4abe-820a-4f18814b86e8", 00:22:02.365 "assigned_rate_limits": { 00:22:02.365 "rw_ios_per_sec": 0, 00:22:02.365 "rw_mbytes_per_sec": 0, 00:22:02.365 "r_mbytes_per_sec": 0, 00:22:02.365 "w_mbytes_per_sec": 0 00:22:02.365 }, 00:22:02.365 "claimed": true, 
00:22:02.365 "claim_type": "exclusive_write", 00:22:02.365 "zoned": false, 00:22:02.365 "supported_io_types": { 00:22:02.365 "read": true, 00:22:02.365 "write": true, 00:22:02.365 "unmap": true, 00:22:02.365 "flush": true, 00:22:02.365 "reset": true, 00:22:02.365 "nvme_admin": false, 00:22:02.365 "nvme_io": false, 00:22:02.365 "nvme_io_md": false, 00:22:02.365 "write_zeroes": true, 00:22:02.365 "zcopy": true, 00:22:02.365 "get_zone_info": false, 00:22:02.365 "zone_management": false, 00:22:02.365 "zone_append": false, 00:22:02.365 "compare": false, 00:22:02.365 "compare_and_write": false, 00:22:02.365 "abort": true, 00:22:02.365 "seek_hole": false, 00:22:02.365 "seek_data": false, 00:22:02.365 "copy": true, 00:22:02.365 "nvme_iov_md": false 00:22:02.365 }, 00:22:02.365 "memory_domains": [ 00:22:02.365 { 00:22:02.365 "dma_device_id": "system", 00:22:02.365 "dma_device_type": 1 00:22:02.365 }, 00:22:02.365 { 00:22:02.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.365 "dma_device_type": 2 00:22:02.365 } 00:22:02.365 ], 00:22:02.365 "driver_specific": {} 00:22:02.365 } 00:22:02.365 ] 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.365 23:03:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.365 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.624 "name": "Existed_Raid", 00:22:02.624 "uuid": "441c04db-81c1-4a02-b78c-0ceb9f81debb", 00:22:02.624 "strip_size_kb": 64, 00:22:02.624 "state": "configuring", 00:22:02.624 "raid_level": "raid5f", 00:22:02.624 "superblock": true, 00:22:02.624 "num_base_bdevs": 3, 00:22:02.624 "num_base_bdevs_discovered": 2, 00:22:02.624 "num_base_bdevs_operational": 3, 00:22:02.624 "base_bdevs_list": [ 00:22:02.624 { 00:22:02.624 "name": "BaseBdev1", 00:22:02.624 "uuid": "6efb4fef-66a0-472e-bb3e-7755e45405db", 
00:22:02.624 "is_configured": true, 00:22:02.624 "data_offset": 2048, 00:22:02.624 "data_size": 63488 00:22:02.624 }, 00:22:02.624 { 00:22:02.624 "name": "BaseBdev2", 00:22:02.624 "uuid": "76abad63-bc8c-4abe-820a-4f18814b86e8", 00:22:02.624 "is_configured": true, 00:22:02.624 "data_offset": 2048, 00:22:02.624 "data_size": 63488 00:22:02.624 }, 00:22:02.624 { 00:22:02.624 "name": "BaseBdev3", 00:22:02.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.624 "is_configured": false, 00:22:02.624 "data_offset": 0, 00:22:02.624 "data_size": 0 00:22:02.624 } 00:22:02.624 ] 00:22:02.624 }' 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.624 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.881 [2024-12-09 23:03:18.710505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.881 [2024-12-09 23:03:18.710851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:02.881 [2024-12-09 23:03:18.710882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:02.881 BaseBdev3 00:22:02.881 [2024-12-09 23:03:18.711361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.881 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.881 [2024-12-09 23:03:18.718270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:02.881 [2024-12-09 23:03:18.718317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:02.882 [2024-12-09 23:03:18.718715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.882 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.882 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:02.882 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.882 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.882 [ 00:22:02.882 { 00:22:02.882 "name": "BaseBdev3", 00:22:02.882 "aliases": [ 00:22:02.882 "84c81201-a7c8-462b-ba4a-737346971a6c" 00:22:02.882 ], 00:22:02.882 "product_name": "Malloc disk", 00:22:02.882 "block_size": 512, 00:22:02.882 
"num_blocks": 65536, 00:22:02.882 "uuid": "84c81201-a7c8-462b-ba4a-737346971a6c", 00:22:02.882 "assigned_rate_limits": { 00:22:02.882 "rw_ios_per_sec": 0, 00:22:02.882 "rw_mbytes_per_sec": 0, 00:22:02.882 "r_mbytes_per_sec": 0, 00:22:03.140 "w_mbytes_per_sec": 0 00:22:03.140 }, 00:22:03.140 "claimed": true, 00:22:03.140 "claim_type": "exclusive_write", 00:22:03.140 "zoned": false, 00:22:03.140 "supported_io_types": { 00:22:03.140 "read": true, 00:22:03.140 "write": true, 00:22:03.140 "unmap": true, 00:22:03.140 "flush": true, 00:22:03.140 "reset": true, 00:22:03.140 "nvme_admin": false, 00:22:03.140 "nvme_io": false, 00:22:03.140 "nvme_io_md": false, 00:22:03.140 "write_zeroes": true, 00:22:03.140 "zcopy": true, 00:22:03.140 "get_zone_info": false, 00:22:03.140 "zone_management": false, 00:22:03.140 "zone_append": false, 00:22:03.140 "compare": false, 00:22:03.140 "compare_and_write": false, 00:22:03.140 "abort": true, 00:22:03.140 "seek_hole": false, 00:22:03.140 "seek_data": false, 00:22:03.140 "copy": true, 00:22:03.140 "nvme_iov_md": false 00:22:03.140 }, 00:22:03.140 "memory_domains": [ 00:22:03.140 { 00:22:03.140 "dma_device_id": "system", 00:22:03.140 "dma_device_type": 1 00:22:03.140 }, 00:22:03.140 { 00:22:03.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.140 "dma_device_type": 2 00:22:03.140 } 00:22:03.140 ], 00:22:03.140 "driver_specific": {} 00:22:03.140 } 00:22:03.140 ] 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.140 "name": "Existed_Raid", 00:22:03.140 "uuid": "441c04db-81c1-4a02-b78c-0ceb9f81debb", 00:22:03.140 "strip_size_kb": 64, 00:22:03.140 "state": "online", 00:22:03.140 "raid_level": "raid5f", 00:22:03.140 "superblock": true, 
00:22:03.140 "num_base_bdevs": 3, 00:22:03.140 "num_base_bdevs_discovered": 3, 00:22:03.140 "num_base_bdevs_operational": 3, 00:22:03.140 "base_bdevs_list": [ 00:22:03.140 { 00:22:03.140 "name": "BaseBdev1", 00:22:03.140 "uuid": "6efb4fef-66a0-472e-bb3e-7755e45405db", 00:22:03.140 "is_configured": true, 00:22:03.140 "data_offset": 2048, 00:22:03.140 "data_size": 63488 00:22:03.140 }, 00:22:03.140 { 00:22:03.140 "name": "BaseBdev2", 00:22:03.140 "uuid": "76abad63-bc8c-4abe-820a-4f18814b86e8", 00:22:03.140 "is_configured": true, 00:22:03.140 "data_offset": 2048, 00:22:03.140 "data_size": 63488 00:22:03.140 }, 00:22:03.140 { 00:22:03.140 "name": "BaseBdev3", 00:22:03.140 "uuid": "84c81201-a7c8-462b-ba4a-737346971a6c", 00:22:03.140 "is_configured": true, 00:22:03.140 "data_offset": 2048, 00:22:03.140 "data_size": 63488 00:22:03.140 } 00:22:03.140 ] 00:22:03.140 }' 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.140 23:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.399 [2024-12-09 23:03:19.233890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.399 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.658 "name": "Existed_Raid", 00:22:03.658 "aliases": [ 00:22:03.658 "441c04db-81c1-4a02-b78c-0ceb9f81debb" 00:22:03.658 ], 00:22:03.658 "product_name": "Raid Volume", 00:22:03.658 "block_size": 512, 00:22:03.658 "num_blocks": 126976, 00:22:03.658 "uuid": "441c04db-81c1-4a02-b78c-0ceb9f81debb", 00:22:03.658 "assigned_rate_limits": { 00:22:03.658 "rw_ios_per_sec": 0, 00:22:03.658 "rw_mbytes_per_sec": 0, 00:22:03.658 "r_mbytes_per_sec": 0, 00:22:03.658 "w_mbytes_per_sec": 0 00:22:03.658 }, 00:22:03.658 "claimed": false, 00:22:03.658 "zoned": false, 00:22:03.658 "supported_io_types": { 00:22:03.658 "read": true, 00:22:03.658 "write": true, 00:22:03.658 "unmap": false, 00:22:03.658 "flush": false, 00:22:03.658 "reset": true, 00:22:03.658 "nvme_admin": false, 00:22:03.658 "nvme_io": false, 00:22:03.658 "nvme_io_md": false, 00:22:03.658 "write_zeroes": true, 00:22:03.658 "zcopy": false, 00:22:03.658 "get_zone_info": false, 00:22:03.658 "zone_management": false, 00:22:03.658 "zone_append": false, 00:22:03.658 "compare": false, 00:22:03.658 "compare_and_write": false, 00:22:03.658 "abort": false, 00:22:03.658 "seek_hole": false, 00:22:03.658 "seek_data": false, 00:22:03.658 "copy": false, 00:22:03.658 "nvme_iov_md": false 00:22:03.658 }, 00:22:03.658 "driver_specific": { 00:22:03.658 "raid": { 00:22:03.658 "uuid": "441c04db-81c1-4a02-b78c-0ceb9f81debb", 00:22:03.658 
"strip_size_kb": 64, 00:22:03.658 "state": "online", 00:22:03.658 "raid_level": "raid5f", 00:22:03.658 "superblock": true, 00:22:03.658 "num_base_bdevs": 3, 00:22:03.658 "num_base_bdevs_discovered": 3, 00:22:03.658 "num_base_bdevs_operational": 3, 00:22:03.658 "base_bdevs_list": [ 00:22:03.658 { 00:22:03.658 "name": "BaseBdev1", 00:22:03.658 "uuid": "6efb4fef-66a0-472e-bb3e-7755e45405db", 00:22:03.658 "is_configured": true, 00:22:03.658 "data_offset": 2048, 00:22:03.658 "data_size": 63488 00:22:03.658 }, 00:22:03.658 { 00:22:03.658 "name": "BaseBdev2", 00:22:03.658 "uuid": "76abad63-bc8c-4abe-820a-4f18814b86e8", 00:22:03.658 "is_configured": true, 00:22:03.658 "data_offset": 2048, 00:22:03.658 "data_size": 63488 00:22:03.658 }, 00:22:03.658 { 00:22:03.658 "name": "BaseBdev3", 00:22:03.658 "uuid": "84c81201-a7c8-462b-ba4a-737346971a6c", 00:22:03.658 "is_configured": true, 00:22:03.658 "data_offset": 2048, 00:22:03.658 "data_size": 63488 00:22:03.658 } 00:22:03.658 ] 00:22:03.658 } 00:22:03.658 } 00:22:03.658 }' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:03.658 BaseBdev2 00:22:03.658 BaseBdev3' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:03.658 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.923 [2024-12-09 23:03:19.517238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.923 "name": "Existed_Raid", 00:22:03.923 "uuid": "441c04db-81c1-4a02-b78c-0ceb9f81debb", 00:22:03.923 "strip_size_kb": 64, 00:22:03.923 "state": "online", 00:22:03.923 "raid_level": "raid5f", 00:22:03.923 "superblock": true, 00:22:03.923 "num_base_bdevs": 3, 00:22:03.923 "num_base_bdevs_discovered": 2, 00:22:03.923 "num_base_bdevs_operational": 2, 
00:22:03.923 "base_bdevs_list": [ 00:22:03.923 { 00:22:03.923 "name": null, 00:22:03.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.923 "is_configured": false, 00:22:03.923 "data_offset": 0, 00:22:03.923 "data_size": 63488 00:22:03.923 }, 00:22:03.923 { 00:22:03.923 "name": "BaseBdev2", 00:22:03.923 "uuid": "76abad63-bc8c-4abe-820a-4f18814b86e8", 00:22:03.923 "is_configured": true, 00:22:03.923 "data_offset": 2048, 00:22:03.923 "data_size": 63488 00:22:03.923 }, 00:22:03.923 { 00:22:03.923 "name": "BaseBdev3", 00:22:03.923 "uuid": "84c81201-a7c8-462b-ba4a-737346971a6c", 00:22:03.923 "is_configured": true, 00:22:03.923 "data_offset": 2048, 00:22:03.923 "data_size": 63488 00:22:03.923 } 00:22:03.923 ] 00:22:03.923 }' 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.923 23:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.517 [2024-12-09 23:03:20.140322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:04.517 [2024-12-09 23:03:20.140537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.517 [2024-12-09 23:03:20.253662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:04.517 
23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.517 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.517 [2024-12-09 23:03:20.309656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:04.517 [2024-12-09 23:03:20.309745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.776 BaseBdev2 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.776 [ 00:22:04.776 { 
00:22:04.776 "name": "BaseBdev2", 00:22:04.776 "aliases": [ 00:22:04.776 "ccf3e53a-b659-4924-a9ce-d3fb06290383" 00:22:04.776 ], 00:22:04.776 "product_name": "Malloc disk", 00:22:04.776 "block_size": 512, 00:22:04.776 "num_blocks": 65536, 00:22:04.776 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383", 00:22:04.776 "assigned_rate_limits": { 00:22:04.776 "rw_ios_per_sec": 0, 00:22:04.776 "rw_mbytes_per_sec": 0, 00:22:04.776 "r_mbytes_per_sec": 0, 00:22:04.776 "w_mbytes_per_sec": 0 00:22:04.776 }, 00:22:04.776 "claimed": false, 00:22:04.776 "zoned": false, 00:22:04.776 "supported_io_types": { 00:22:04.776 "read": true, 00:22:04.776 "write": true, 00:22:04.776 "unmap": true, 00:22:04.776 "flush": true, 00:22:04.776 "reset": true, 00:22:04.776 "nvme_admin": false, 00:22:04.776 "nvme_io": false, 00:22:04.776 "nvme_io_md": false, 00:22:04.776 "write_zeroes": true, 00:22:04.776 "zcopy": true, 00:22:04.776 "get_zone_info": false, 00:22:04.776 "zone_management": false, 00:22:04.776 "zone_append": false, 00:22:04.776 "compare": false, 00:22:04.776 "compare_and_write": false, 00:22:04.776 "abort": true, 00:22:04.776 "seek_hole": false, 00:22:04.776 "seek_data": false, 00:22:04.776 "copy": true, 00:22:04.776 "nvme_iov_md": false 00:22:04.776 }, 00:22:04.776 "memory_domains": [ 00:22:04.776 { 00:22:04.776 "dma_device_id": "system", 00:22:04.776 "dma_device_type": 1 00:22:04.776 }, 00:22:04.776 { 00:22:04.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.776 "dma_device_type": 2 00:22:04.776 } 00:22:04.776 ], 00:22:04.776 "driver_specific": {} 00:22:04.776 } 00:22:04.776 ] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.776 BaseBdev3 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.776 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.035 23:03:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.035 [ 00:22:05.035 { 00:22:05.035 "name": "BaseBdev3", 00:22:05.035 "aliases": [ 00:22:05.035 "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98" 00:22:05.035 ], 00:22:05.035 "product_name": "Malloc disk", 00:22:05.035 "block_size": 512, 00:22:05.035 "num_blocks": 65536, 00:22:05.035 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98", 00:22:05.035 "assigned_rate_limits": { 00:22:05.035 "rw_ios_per_sec": 0, 00:22:05.035 "rw_mbytes_per_sec": 0, 00:22:05.035 "r_mbytes_per_sec": 0, 00:22:05.035 "w_mbytes_per_sec": 0 00:22:05.035 }, 00:22:05.035 "claimed": false, 00:22:05.035 "zoned": false, 00:22:05.035 "supported_io_types": { 00:22:05.035 "read": true, 00:22:05.035 "write": true, 00:22:05.035 "unmap": true, 00:22:05.035 "flush": true, 00:22:05.035 "reset": true, 00:22:05.035 "nvme_admin": false, 00:22:05.035 "nvme_io": false, 00:22:05.035 "nvme_io_md": false, 00:22:05.035 "write_zeroes": true, 00:22:05.035 "zcopy": true, 00:22:05.035 "get_zone_info": false, 00:22:05.035 "zone_management": false, 00:22:05.035 "zone_append": false, 00:22:05.035 "compare": false, 00:22:05.035 "compare_and_write": false, 00:22:05.035 "abort": true, 00:22:05.035 "seek_hole": false, 00:22:05.035 "seek_data": false, 00:22:05.035 "copy": true, 00:22:05.035 "nvme_iov_md": false 00:22:05.035 }, 00:22:05.035 "memory_domains": [ 00:22:05.035 { 00:22:05.035 "dma_device_id": "system", 00:22:05.035 "dma_device_type": 1 00:22:05.035 }, 00:22:05.035 { 00:22:05.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.035 "dma_device_type": 2 00:22:05.035 } 00:22:05.035 ], 00:22:05.035 "driver_specific": {} 00:22:05.035 } 00:22:05.035 ] 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.035 [2024-12-09 23:03:20.661985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.035 [2024-12-09 23:03:20.662064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.035 [2024-12-09 23:03:20.662101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:05.035 [2024-12-09 23:03:20.664451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:05.035 23:03:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.035 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.036 "name": "Existed_Raid", 00:22:05.036 "uuid": "ea7465e9-a491-4688-a919-8264c21de180", 00:22:05.036 "strip_size_kb": 64, 00:22:05.036 "state": "configuring", 00:22:05.036 "raid_level": "raid5f", 00:22:05.036 "superblock": true, 00:22:05.036 "num_base_bdevs": 3, 00:22:05.036 "num_base_bdevs_discovered": 2, 00:22:05.036 "num_base_bdevs_operational": 3, 00:22:05.036 "base_bdevs_list": [ 00:22:05.036 { 00:22:05.036 "name": "BaseBdev1", 00:22:05.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.036 "is_configured": false, 00:22:05.036 "data_offset": 0, 00:22:05.036 "data_size": 0 00:22:05.036 }, 00:22:05.036 { 00:22:05.036 "name": "BaseBdev2", 00:22:05.036 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383", 00:22:05.036 "is_configured": true, 00:22:05.036 "data_offset": 2048, 00:22:05.036 "data_size": 63488 00:22:05.036 }, 00:22:05.036 { 
00:22:05.036 "name": "BaseBdev3", 00:22:05.036 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98", 00:22:05.036 "is_configured": true, 00:22:05.036 "data_offset": 2048, 00:22:05.036 "data_size": 63488 00:22:05.036 } 00:22:05.036 ] 00:22:05.036 }' 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.036 23:03:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.294 [2024-12-09 23:03:21.089241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.294 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:05.294 "name": "Existed_Raid",
00:22:05.294 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:05.294 "strip_size_kb": 64,
00:22:05.294 "state": "configuring",
00:22:05.294 "raid_level": "raid5f",
00:22:05.294 "superblock": true,
00:22:05.294 "num_base_bdevs": 3,
00:22:05.295 "num_base_bdevs_discovered": 1,
00:22:05.295 "num_base_bdevs_operational": 3,
00:22:05.295 "base_bdevs_list": [
00:22:05.295 {
00:22:05.295 "name": "BaseBdev1",
00:22:05.295 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:05.295 "is_configured": false,
00:22:05.295 "data_offset": 0,
00:22:05.295 "data_size": 0
00:22:05.295 },
00:22:05.295 {
00:22:05.295 "name": null,
00:22:05.295 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:05.295 "is_configured": false,
00:22:05.295 "data_offset": 0,
00:22:05.295 "data_size": 63488
00:22:05.295 },
00:22:05.295 {
00:22:05.295 "name": "BaseBdev3",
00:22:05.295 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:05.295 "is_configured": true,
00:22:05.295 "data_offset": 2048,
00:22:05.295 "data_size": 63488
00:22:05.295 }
00:22:05.295 ]
00:22:05.295 }'
00:22:05.295 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:05.295 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.861 [2024-12-09 23:03:21.635737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:05.861 BaseBdev1
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.861 [
00:22:05.861 {
00:22:05.861 "name": "BaseBdev1",
00:22:05.861 "aliases": [
00:22:05.861 "04543a01-6c29-45cf-a7bc-83fccf721d87"
00:22:05.861 ],
00:22:05.861 "product_name": "Malloc disk",
00:22:05.861 "block_size": 512,
00:22:05.861 "num_blocks": 65536,
00:22:05.861 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:05.861 "assigned_rate_limits": {
00:22:05.861 "rw_ios_per_sec": 0,
00:22:05.861 "rw_mbytes_per_sec": 0,
00:22:05.861 "r_mbytes_per_sec": 0,
00:22:05.861 "w_mbytes_per_sec": 0
00:22:05.861 },
00:22:05.861 "claimed": true,
00:22:05.861 "claim_type": "exclusive_write",
00:22:05.861 "zoned": false,
00:22:05.861 "supported_io_types": {
00:22:05.861 "read": true,
00:22:05.861 "write": true,
00:22:05.861 "unmap": true,
00:22:05.861 "flush": true,
00:22:05.861 "reset": true,
00:22:05.861 "nvme_admin": false,
00:22:05.861 "nvme_io": false,
00:22:05.861 "nvme_io_md": false,
00:22:05.861 "write_zeroes": true,
00:22:05.861 "zcopy": true,
00:22:05.861 "get_zone_info": false,
00:22:05.861 "zone_management": false,
00:22:05.861 "zone_append": false,
00:22:05.861 "compare": false,
00:22:05.861 "compare_and_write": false,
00:22:05.861 "abort": true,
00:22:05.861 "seek_hole": false,
00:22:05.861 "seek_data": false,
00:22:05.861 "copy": true,
00:22:05.861 "nvme_iov_md": false
00:22:05.861 },
00:22:05.861 "memory_domains": [
00:22:05.861 {
00:22:05.861 "dma_device_id": "system",
00:22:05.861 "dma_device_type": 1
00:22:05.861 },
00:22:05.861 {
00:22:05.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:05.861 "dma_device_type": 2
00:22:05.861 }
00:22:05.861 ],
00:22:05.861 "driver_specific": {}
00:22:05.861 }
00:22:05.861 ]
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:05.861 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.118 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:06.118 "name": "Existed_Raid",
00:22:06.118 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:06.118 "strip_size_kb": 64,
00:22:06.118 "state": "configuring",
00:22:06.118 "raid_level": "raid5f",
00:22:06.118 "superblock": true,
00:22:06.118 "num_base_bdevs": 3,
00:22:06.118 "num_base_bdevs_discovered": 2,
00:22:06.118 "num_base_bdevs_operational": 3,
00:22:06.118 "base_bdevs_list": [
00:22:06.118 {
00:22:06.118 "name": "BaseBdev1",
00:22:06.118 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:06.118 "is_configured": true,
00:22:06.118 "data_offset": 2048,
00:22:06.118 "data_size": 63488
00:22:06.119 },
00:22:06.119 {
00:22:06.119 "name": null,
00:22:06.119 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:06.119 "is_configured": false,
00:22:06.119 "data_offset": 0,
00:22:06.119 "data_size": 63488
00:22:06.119 },
00:22:06.119 {
00:22:06.119 "name": "BaseBdev3",
00:22:06.119 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:06.119 "is_configured": true,
00:22:06.119 "data_offset": 2048,
00:22:06.119 "data_size": 63488
00:22:06.119 }
00:22:06.119 ]
00:22:06.119 }'
00:22:06.119 23:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:06.119 23:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.378 [2024-12-09 23:03:22.202911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.378 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.638 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:06.638 "name": "Existed_Raid",
00:22:06.638 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:06.638 "strip_size_kb": 64,
00:22:06.638 "state": "configuring",
00:22:06.638 "raid_level": "raid5f",
00:22:06.638 "superblock": true,
00:22:06.638 "num_base_bdevs": 3,
00:22:06.638 "num_base_bdevs_discovered": 1,
00:22:06.638 "num_base_bdevs_operational": 3,
00:22:06.638 "base_bdevs_list": [
00:22:06.638 {
00:22:06.638 "name": "BaseBdev1",
00:22:06.638 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:06.638 "is_configured": true,
00:22:06.638 "data_offset": 2048,
00:22:06.638 "data_size": 63488
00:22:06.638 },
00:22:06.638 {
00:22:06.638 "name": null,
00:22:06.638 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:06.638 "is_configured": false,
00:22:06.638 "data_offset": 0,
00:22:06.638 "data_size": 63488
00:22:06.638 },
00:22:06.638 {
00:22:06.638 "name": null,
00:22:06.638 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:06.638 "is_configured": false,
00:22:06.638 "data_offset": 0,
00:22:06.638 "data_size": 63488
00:22:06.638 }
00:22:06.638 ]
00:22:06.638 }'
00:22:06.638 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:06.638 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.898 [2024-12-09 23:03:22.702130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:06.898 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.157 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:07.157 "name": "Existed_Raid",
00:22:07.157 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:07.158 "strip_size_kb": 64,
00:22:07.158 "state": "configuring",
00:22:07.158 "raid_level": "raid5f",
00:22:07.158 "superblock": true,
00:22:07.158 "num_base_bdevs": 3,
00:22:07.158 "num_base_bdevs_discovered": 2,
00:22:07.158 "num_base_bdevs_operational": 3,
00:22:07.158 "base_bdevs_list": [
00:22:07.158 {
00:22:07.158 "name": "BaseBdev1",
00:22:07.158 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:07.158 "is_configured": true,
00:22:07.158 "data_offset": 2048,
00:22:07.158 "data_size": 63488
00:22:07.158 },
00:22:07.158 {
00:22:07.158 "name": null,
00:22:07.158 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:07.158 "is_configured": false,
00:22:07.158 "data_offset": 0,
00:22:07.158 "data_size": 63488
00:22:07.158 },
00:22:07.158 {
00:22:07.158 "name": "BaseBdev3",
00:22:07.158 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:07.158 "is_configured": true,
00:22:07.158 "data_offset": 2048,
00:22:07.158 "data_size": 63488
00:22:07.158 }
00:22:07.158 ]
00:22:07.158 }'
00:22:07.158 23:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:07.158 23:03:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.417 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:07.417 [2024-12-09 23:03:23.225261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.676 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:07.676 "name": "Existed_Raid",
00:22:07.676 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:07.676 "strip_size_kb": 64,
00:22:07.676 "state": "configuring",
00:22:07.676 "raid_level": "raid5f",
00:22:07.676 "superblock": true,
00:22:07.676 "num_base_bdevs": 3,
00:22:07.676 "num_base_bdevs_discovered": 1,
00:22:07.676 "num_base_bdevs_operational": 3,
00:22:07.676 "base_bdevs_list": [
00:22:07.676 {
00:22:07.676 "name": null,
00:22:07.676 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:07.676 "is_configured": false,
00:22:07.676 "data_offset": 0,
00:22:07.676 "data_size": 63488
00:22:07.676 },
00:22:07.676 {
00:22:07.676 "name": null,
00:22:07.677 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:07.677 "is_configured": false,
00:22:07.677 "data_offset": 0,
00:22:07.677 "data_size": 63488
00:22:07.677 },
00:22:07.677 {
00:22:07.677 "name": "BaseBdev3",
00:22:07.677 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:07.677 "is_configured": true,
00:22:07.677 "data_offset": 2048,
00:22:07.677 "data_size": 63488
00:22:07.677 }
00:22:07.677 ]
00:22:07.677 }'
00:22:07.677 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:07.677 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.246 [2024-12-09 23:03:23.852671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:08.246 "name": "Existed_Raid",
00:22:08.246 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:08.246 "strip_size_kb": 64,
00:22:08.246 "state": "configuring",
00:22:08.246 "raid_level": "raid5f",
00:22:08.246 "superblock": true,
00:22:08.246 "num_base_bdevs": 3,
00:22:08.246 "num_base_bdevs_discovered": 2,
00:22:08.246 "num_base_bdevs_operational": 3,
00:22:08.246 "base_bdevs_list": [
00:22:08.246 {
00:22:08.246 "name": null,
00:22:08.246 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:08.246 "is_configured": false,
00:22:08.246 "data_offset": 0,
00:22:08.246 "data_size": 63488
00:22:08.246 },
00:22:08.246 {
00:22:08.246 "name": "BaseBdev2",
00:22:08.246 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:08.246 "is_configured": true,
00:22:08.246 "data_offset": 2048,
00:22:08.246 "data_size": 63488
00:22:08.246 },
00:22:08.246 {
00:22:08.246 "name": "BaseBdev3",
00:22:08.246 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:08.246 "is_configured": true,
00:22:08.246 "data_offset": 2048,
00:22:08.246 "data_size": 63488
00:22:08.246 }
00:22:08.246 ]
00:22:08.246 }'
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:08.246 23:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.505 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.505 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.505 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.505 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:22:08.505 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04543a01-6c29-45cf-a7bc-83fccf721d87
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.766 [2024-12-09 23:03:24.458944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:22:08.766 [2024-12-09 23:03:24.459384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:22:08.766 [2024-12-09 23:03:24.459415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:08.766 [2024-12-09 23:03:24.459788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:22:08.766 NewBaseBdev
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.766 [2024-12-09 23:03:24.466484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:22:08.766 [2024-12-09 23:03:24.466517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:22:08.766 [2024-12-09 23:03:24.466856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.766 [
00:22:08.766 {
00:22:08.766 "name": "NewBaseBdev",
00:22:08.766 "aliases": [
00:22:08.766 "04543a01-6c29-45cf-a7bc-83fccf721d87"
00:22:08.766 ],
00:22:08.766 "product_name": "Malloc disk",
00:22:08.766 "block_size": 512,
00:22:08.766 "num_blocks": 65536,
00:22:08.766 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:08.766 "assigned_rate_limits": {
00:22:08.766 "rw_ios_per_sec": 0,
00:22:08.766 "rw_mbytes_per_sec": 0,
00:22:08.766 "r_mbytes_per_sec": 0,
00:22:08.766 "w_mbytes_per_sec": 0
00:22:08.766 },
00:22:08.766 "claimed": true,
00:22:08.766 "claim_type": "exclusive_write",
00:22:08.766 "zoned": false,
00:22:08.766 "supported_io_types": {
00:22:08.766 "read": true,
00:22:08.766 "write": true,
00:22:08.766 "unmap": true,
00:22:08.766 "flush": true,
00:22:08.766 "reset": true,
00:22:08.766 "nvme_admin": false,
00:22:08.766 "nvme_io": false,
00:22:08.766 "nvme_io_md": false,
00:22:08.766 "write_zeroes": true,
00:22:08.766 "zcopy": true,
00:22:08.766 "get_zone_info": false,
00:22:08.766 "zone_management": false,
00:22:08.766 "zone_append": false,
00:22:08.766 "compare": false,
00:22:08.766 "compare_and_write": false,
00:22:08.766 "abort": true,
00:22:08.766 "seek_hole": false,
00:22:08.766 "seek_data": false,
00:22:08.766 "copy": true,
00:22:08.766 "nvme_iov_md": false
00:22:08.766 },
00:22:08.766 "memory_domains": [
00:22:08.766 {
00:22:08.766 "dma_device_id": "system",
00:22:08.766 "dma_device_type": 1
00:22:08.766 },
00:22:08.766 {
00:22:08.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:08.766 "dma_device_type": 2
00:22:08.766 }
00:22:08.766 ],
00:22:08.766 "driver_specific": {}
00:22:08.766 }
00:22:08.766 ]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:08.766 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:08.767 "name": "Existed_Raid",
00:22:08.767 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:08.767 "strip_size_kb": 64,
00:22:08.767 "state": "online",
00:22:08.767 "raid_level": "raid5f",
00:22:08.767 "superblock": true,
00:22:08.767 "num_base_bdevs": 3,
00:22:08.767 "num_base_bdevs_discovered": 3,
00:22:08.767 "num_base_bdevs_operational": 3,
00:22:08.767 "base_bdevs_list": [
00:22:08.767 {
00:22:08.767 "name": "NewBaseBdev",
00:22:08.767 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:08.767 "is_configured": true,
00:22:08.767 "data_offset": 2048,
00:22:08.767 "data_size": 63488
00:22:08.767 },
00:22:08.767 {
00:22:08.767 "name": "BaseBdev2",
00:22:08.767 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:08.767 "is_configured": true,
00:22:08.767 "data_offset": 2048,
00:22:08.767 "data_size": 63488
00:22:08.767 },
00:22:08.767 {
00:22:08.767 "name": "BaseBdev3",
00:22:08.767 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:08.767 "is_configured": true,
00:22:08.767 "data_offset": 2048,
00:22:08.767 "data_size": 63488
00:22:08.767 }
00:22:08.767 ]
00:22:08.767 }'
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:08.767 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:22:09.350 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.351 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:09.351 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' [2024-12-09 23:03:24.933671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:09.351 23:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.351 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:09.351 "name": "Existed_Raid",
00:22:09.351 "aliases": [
00:22:09.351 "ea7465e9-a491-4688-a919-8264c21de180"
00:22:09.351 ],
00:22:09.351 "product_name": "Raid Volume",
00:22:09.351 "block_size": 512,
00:22:09.351 "num_blocks": 126976,
00:22:09.351 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:09.351 "assigned_rate_limits": {
00:22:09.351 "rw_ios_per_sec": 0,
00:22:09.351 "rw_mbytes_per_sec": 0,
00:22:09.351 "r_mbytes_per_sec": 0,
00:22:09.351 "w_mbytes_per_sec": 0
00:22:09.351 },
00:22:09.351 "claimed": false,
00:22:09.351 "zoned": false,
00:22:09.351 "supported_io_types": {
00:22:09.351 "read": true,
00:22:09.351 "write": true,
00:22:09.351 "unmap": false,
00:22:09.351 "flush": false,
00:22:09.351 "reset": true,
00:22:09.351 "nvme_admin": false,
00:22:09.351 "nvme_io": false,
00:22:09.351 "nvme_io_md": false,
00:22:09.351 "write_zeroes": true,
00:22:09.351 "zcopy": false,
00:22:09.351 "get_zone_info": false,
00:22:09.351 "zone_management": false,
00:22:09.351 "zone_append": false,
00:22:09.351 "compare": false,
00:22:09.351 "compare_and_write": false,
00:22:09.351 "abort": false,
00:22:09.351 "seek_hole": false,
00:22:09.351 "seek_data": false,
00:22:09.351 "copy": false,
00:22:09.351 "nvme_iov_md": false
00:22:09.351 },
00:22:09.351 "driver_specific": {
00:22:09.351 "raid": {
00:22:09.351 "uuid": "ea7465e9-a491-4688-a919-8264c21de180",
00:22:09.351 "strip_size_kb": 64,
00:22:09.351 "state": "online",
00:22:09.351 "raid_level": "raid5f",
00:22:09.351 "superblock": true,
00:22:09.351 "num_base_bdevs": 3,
00:22:09.351 "num_base_bdevs_discovered": 3,
00:22:09.351 "num_base_bdevs_operational": 3,
00:22:09.351 "base_bdevs_list": [
00:22:09.351 {
00:22:09.351 "name": "NewBaseBdev",
00:22:09.351 "uuid": "04543a01-6c29-45cf-a7bc-83fccf721d87",
00:22:09.351 "is_configured": true,
00:22:09.351 "data_offset": 2048,
00:22:09.351 "data_size": 63488
00:22:09.351 },
00:22:09.351 {
00:22:09.351 "name": "BaseBdev2",
00:22:09.351 "uuid": "ccf3e53a-b659-4924-a9ce-d3fb06290383",
00:22:09.351 "is_configured": true,
00:22:09.351 "data_offset": 2048,
00:22:09.351 "data_size": 63488
00:22:09.351 },
00:22:09.351 {
00:22:09.351 "name": "BaseBdev3",
00:22:09.351 "uuid": "0dd23099-c0e6-4373-9c8e-3b0fc03a3e98",
00:22:09.351 "is_configured": true,
00:22:09.351 "data_offset": 2048,
00:22:09.351 "data_size": 63488
00:22:09.351 }
00:22:09.351 ]
00:22:09.351 }
00:22:09.351 }
00:22:09.351 }'
00:22:09.351 23:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:09.351 BaseBdev2 00:22:09.351 BaseBdev3' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.351 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.611 [2024-12-09 23:03:25.220987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.611 [2024-12-09 23:03:25.221027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:22:09.611 [2024-12-09 23:03:25.221126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.611 [2024-12-09 23:03:25.221474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.611 [2024-12-09 23:03:25.221495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81174 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81174 ']' 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81174 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81174 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.611 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81174' 00:22:09.611 killing process with pid 81174 00:22:09.612 23:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81174 00:22:09.612 [2024-12-09 23:03:25.270925] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.612 23:03:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 81174 00:22:09.871 [2024-12-09 23:03:25.636525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:11.247 23:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:11.247 00:22:11.247 real 0m11.270s 00:22:11.247 user 0m17.709s 00:22:11.247 sys 0m1.973s 00:22:11.247 23:03:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.247 23:03:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.247 ************************************ 00:22:11.247 END TEST raid5f_state_function_test_sb 00:22:11.247 ************************************ 00:22:11.247 23:03:26 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:11.247 23:03:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:11.247 23:03:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.247 23:03:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:11.247 ************************************ 00:22:11.247 START TEST raid5f_superblock_test 00:22:11.247 ************************************ 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81808 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81808 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81808 ']' 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:11.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.247 23:03:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.247 [2024-12-09 23:03:27.047371] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:22:11.247 [2024-12-09 23:03:27.047681] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81808 ] 00:22:11.506 [2024-12-09 23:03:27.216234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.506 [2024-12-09 23:03:27.342074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.794 [2024-12-09 23:03:27.557561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:11.794 [2024-12-09 23:03:27.557738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:12.367 23:03:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.367 malloc1 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.367 [2024-12-09 23:03:27.978931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:12.367 [2024-12-09 23:03:27.979066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.367 [2024-12-09 23:03:27.979113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:12.367 [2024-12-09 23:03:27.979148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.367 [2024-12-09 23:03:27.981596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.367 [2024-12-09 23:03:27.981683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:12.367 pt1 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.367 malloc2 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.367 [2024-12-09 23:03:28.038729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:12.367 [2024-12-09 23:03:28.038807] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.367 [2024-12-09 23:03:28.038835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:12.367 [2024-12-09 23:03:28.038846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.367 [2024-12-09 23:03:28.041423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.367 [2024-12-09 23:03:28.041488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:12.367 pt2 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.367 malloc3 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.367 [2024-12-09 23:03:28.106943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:12.367 [2024-12-09 23:03:28.107008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.367 [2024-12-09 23:03:28.107049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:12.367 [2024-12-09 23:03:28.107060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.367 [2024-12-09 23:03:28.109455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.367 [2024-12-09 23:03:28.109504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:12.367 pt3 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.367 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.368 [2024-12-09 23:03:28.118984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:12.368 [2024-12-09 
23:03:28.121043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:12.368 [2024-12-09 23:03:28.121123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:12.368 [2024-12-09 23:03:28.121336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:12.368 [2024-12-09 23:03:28.121360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:12.368 [2024-12-09 23:03:28.121690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:12.368 [2024-12-09 23:03:28.127794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:12.368 [2024-12-09 23:03:28.127860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:12.368 [2024-12-09 23:03:28.128141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.368 "name": "raid_bdev1", 00:22:12.368 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:12.368 "strip_size_kb": 64, 00:22:12.368 "state": "online", 00:22:12.368 "raid_level": "raid5f", 00:22:12.368 "superblock": true, 00:22:12.368 "num_base_bdevs": 3, 00:22:12.368 "num_base_bdevs_discovered": 3, 00:22:12.368 "num_base_bdevs_operational": 3, 00:22:12.368 "base_bdevs_list": [ 00:22:12.368 { 00:22:12.368 "name": "pt1", 00:22:12.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:12.368 "is_configured": true, 00:22:12.368 "data_offset": 2048, 00:22:12.368 "data_size": 63488 00:22:12.368 }, 00:22:12.368 { 00:22:12.368 "name": "pt2", 00:22:12.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:12.368 "is_configured": true, 00:22:12.368 "data_offset": 2048, 00:22:12.368 "data_size": 63488 00:22:12.368 }, 00:22:12.368 { 00:22:12.368 "name": "pt3", 00:22:12.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:12.368 "is_configured": true, 00:22:12.368 "data_offset": 2048, 00:22:12.368 "data_size": 63488 00:22:12.368 } 00:22:12.368 ] 00:22:12.368 }' 00:22:12.368 23:03:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.368 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:12.971 [2024-12-09 23:03:28.595191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.971 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:12.971 "name": "raid_bdev1", 00:22:12.971 "aliases": [ 00:22:12.971 "650d7362-8be3-4ece-9f60-74e4f26d8889" 00:22:12.971 ], 00:22:12.971 "product_name": "Raid Volume", 00:22:12.971 "block_size": 512, 00:22:12.971 "num_blocks": 126976, 00:22:12.971 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:12.971 "assigned_rate_limits": { 00:22:12.971 "rw_ios_per_sec": 0, 00:22:12.971 
"rw_mbytes_per_sec": 0, 00:22:12.971 "r_mbytes_per_sec": 0, 00:22:12.971 "w_mbytes_per_sec": 0 00:22:12.971 }, 00:22:12.971 "claimed": false, 00:22:12.971 "zoned": false, 00:22:12.971 "supported_io_types": { 00:22:12.971 "read": true, 00:22:12.972 "write": true, 00:22:12.972 "unmap": false, 00:22:12.972 "flush": false, 00:22:12.972 "reset": true, 00:22:12.972 "nvme_admin": false, 00:22:12.972 "nvme_io": false, 00:22:12.972 "nvme_io_md": false, 00:22:12.972 "write_zeroes": true, 00:22:12.972 "zcopy": false, 00:22:12.972 "get_zone_info": false, 00:22:12.972 "zone_management": false, 00:22:12.972 "zone_append": false, 00:22:12.972 "compare": false, 00:22:12.972 "compare_and_write": false, 00:22:12.972 "abort": false, 00:22:12.972 "seek_hole": false, 00:22:12.972 "seek_data": false, 00:22:12.972 "copy": false, 00:22:12.972 "nvme_iov_md": false 00:22:12.972 }, 00:22:12.972 "driver_specific": { 00:22:12.972 "raid": { 00:22:12.972 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:12.972 "strip_size_kb": 64, 00:22:12.972 "state": "online", 00:22:12.972 "raid_level": "raid5f", 00:22:12.972 "superblock": true, 00:22:12.972 "num_base_bdevs": 3, 00:22:12.972 "num_base_bdevs_discovered": 3, 00:22:12.972 "num_base_bdevs_operational": 3, 00:22:12.972 "base_bdevs_list": [ 00:22:12.972 { 00:22:12.972 "name": "pt1", 00:22:12.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:12.972 "is_configured": true, 00:22:12.972 "data_offset": 2048, 00:22:12.972 "data_size": 63488 00:22:12.972 }, 00:22:12.972 { 00:22:12.972 "name": "pt2", 00:22:12.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:12.972 "is_configured": true, 00:22:12.972 "data_offset": 2048, 00:22:12.972 "data_size": 63488 00:22:12.972 }, 00:22:12.972 { 00:22:12.972 "name": "pt3", 00:22:12.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:12.972 "is_configured": true, 00:22:12.972 "data_offset": 2048, 00:22:12.972 "data_size": 63488 00:22:12.972 } 00:22:12.972 ] 00:22:12.972 } 00:22:12.972 } 
00:22:12.972 }' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:12.972 pt2 00:22:12.972 pt3' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.972 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 [2024-12-09 23:03:28.874717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=650d7362-8be3-4ece-9f60-74e4f26d8889 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 650d7362-8be3-4ece-9f60-74e4f26d8889 ']' 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 [2024-12-09 23:03:28.926398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:13.232 [2024-12-09 23:03:28.926434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:13.232 [2024-12-09 23:03:28.926544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:13.232 [2024-12-09 23:03:28.926627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:13.232 [2024-12-09 23:03:28.926638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.232 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.232 [2024-12-09 23:03:29.082231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:13.232 [2024-12-09 23:03:29.084286] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:13.232 [2024-12-09 23:03:29.084351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:13.232 [2024-12-09 23:03:29.084407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:13.232 [2024-12-09 23:03:29.084481] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:13.232 [2024-12-09 23:03:29.084513] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:13.232 [2024-12-09 23:03:29.084532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:13.232 [2024-12-09 23:03:29.084543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:13.491 request: 00:22:13.491 { 00:22:13.491 "name": "raid_bdev1", 00:22:13.491 "raid_level": "raid5f", 00:22:13.491 "base_bdevs": [ 00:22:13.491 "malloc1", 00:22:13.491 "malloc2", 00:22:13.491 "malloc3" 00:22:13.491 ], 00:22:13.491 "strip_size_kb": 64, 00:22:13.491 "superblock": false, 00:22:13.491 "method": "bdev_raid_create", 00:22:13.491 "req_id": 1 00:22:13.491 } 00:22:13.491 Got JSON-RPC error response 00:22:13.491 response: 00:22:13.491 { 00:22:13.491 "code": -17, 00:22:13.491 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:13.491 } 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.491 23:03:29 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.491 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.491 [2024-12-09 23:03:29.142033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:13.491 [2024-12-09 23:03:29.142103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.492 [2024-12-09 23:03:29.142125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:13.492 [2024-12-09 23:03:29.142134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.492 [2024-12-09 23:03:29.144388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.492 [2024-12-09 23:03:29.144429] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:13.492 [2024-12-09 23:03:29.144542] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:22:13.492 [2024-12-09 23:03:29.144615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.492 pt1 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.492 
23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.492 "name": "raid_bdev1", 00:22:13.492 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:13.492 "strip_size_kb": 64, 00:22:13.492 "state": "configuring", 00:22:13.492 "raid_level": "raid5f", 00:22:13.492 "superblock": true, 00:22:13.492 "num_base_bdevs": 3, 00:22:13.492 "num_base_bdevs_discovered": 1, 00:22:13.492 "num_base_bdevs_operational": 3, 00:22:13.492 "base_bdevs_list": [ 00:22:13.492 { 00:22:13.492 "name": "pt1", 00:22:13.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.492 "is_configured": true, 00:22:13.492 "data_offset": 2048, 00:22:13.492 "data_size": 63488 00:22:13.492 }, 00:22:13.492 { 00:22:13.492 "name": null, 00:22:13.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.492 "is_configured": false, 00:22:13.492 "data_offset": 2048, 00:22:13.492 "data_size": 63488 00:22:13.492 }, 00:22:13.492 { 00:22:13.492 "name": null, 00:22:13.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.492 "is_configured": false, 00:22:13.492 "data_offset": 2048, 00:22:13.492 "data_size": 63488 00:22:13.492 } 00:22:13.492 ] 00:22:13.492 }' 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.492 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.750 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:13.750 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:13.750 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.750 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.750 [2024-12-09 23:03:29.605283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:13.750 
[2024-12-09 23:03:29.605452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.750 [2024-12-09 23:03:29.605518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:13.750 [2024-12-09 23:03:29.605569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.009 [2024-12-09 23:03:29.606126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.009 [2024-12-09 23:03:29.606197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.009 [2024-12-09 23:03:29.606324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:14.009 [2024-12-09 23:03:29.606382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:14.009 pt2 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.009 [2024-12-09 23:03:29.617305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:14.009 
23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.009 "name": "raid_bdev1", 00:22:14.009 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:14.009 "strip_size_kb": 64, 00:22:14.009 "state": "configuring", 00:22:14.009 "raid_level": "raid5f", 00:22:14.009 "superblock": true, 00:22:14.009 "num_base_bdevs": 3, 00:22:14.009 "num_base_bdevs_discovered": 1, 00:22:14.009 "num_base_bdevs_operational": 3, 00:22:14.009 "base_bdevs_list": [ 00:22:14.009 { 00:22:14.009 "name": "pt1", 00:22:14.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.009 "is_configured": true, 00:22:14.009 "data_offset": 2048, 00:22:14.009 "data_size": 63488 00:22:14.009 }, 00:22:14.009 { 00:22:14.009 "name": null, 00:22:14.009 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:14.009 "is_configured": false, 00:22:14.009 "data_offset": 0, 00:22:14.009 "data_size": 63488 00:22:14.009 }, 00:22:14.009 { 00:22:14.009 "name": null, 00:22:14.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.009 "is_configured": false, 00:22:14.009 "data_offset": 2048, 00:22:14.009 "data_size": 63488 00:22:14.009 } 00:22:14.009 ] 00:22:14.009 }' 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.009 23:03:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.269 [2024-12-09 23:03:30.092483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:14.269 [2024-12-09 23:03:30.092575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.269 [2024-12-09 23:03:30.092597] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:14.269 [2024-12-09 23:03:30.092609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.269 [2024-12-09 23:03:30.093128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.269 [2024-12-09 23:03:30.093152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.269 [2024-12-09 23:03:30.093242] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:22:14.269 [2024-12-09 23:03:30.093271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:14.269 pt2 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.269 [2024-12-09 23:03:30.104444] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:14.269 [2024-12-09 23:03:30.104553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.269 [2024-12-09 23:03:30.104573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:14.269 [2024-12-09 23:03:30.104587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.269 [2024-12-09 23:03:30.105079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.269 [2024-12-09 23:03:30.105111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:14.269 [2024-12-09 23:03:30.105197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:14.269 [2024-12-09 23:03:30.105224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:14.269 [2024-12-09 23:03:30.105377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:14.269 [2024-12-09 
23:03:30.105390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:14.269 [2024-12-09 23:03:30.105683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:14.269 [2024-12-09 23:03:30.112286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:14.269 [2024-12-09 23:03:30.112361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:14.269 [2024-12-09 23:03:30.112655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.269 pt3 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.269 
23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.269 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.529 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.529 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.529 "name": "raid_bdev1", 00:22:14.529 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:14.529 "strip_size_kb": 64, 00:22:14.529 "state": "online", 00:22:14.529 "raid_level": "raid5f", 00:22:14.529 "superblock": true, 00:22:14.529 "num_base_bdevs": 3, 00:22:14.529 "num_base_bdevs_discovered": 3, 00:22:14.529 "num_base_bdevs_operational": 3, 00:22:14.529 "base_bdevs_list": [ 00:22:14.529 { 00:22:14.529 "name": "pt1", 00:22:14.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.529 "is_configured": true, 00:22:14.529 "data_offset": 2048, 00:22:14.529 "data_size": 63488 00:22:14.529 }, 00:22:14.529 { 00:22:14.529 "name": "pt2", 00:22:14.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.529 "is_configured": true, 00:22:14.529 "data_offset": 2048, 00:22:14.529 "data_size": 63488 00:22:14.529 }, 00:22:14.529 { 00:22:14.529 "name": "pt3", 00:22:14.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.529 "is_configured": true, 00:22:14.529 "data_offset": 2048, 00:22:14.529 "data_size": 63488 00:22:14.529 } 00:22:14.529 ] 00:22:14.529 }' 00:22:14.529 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.529 23:03:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.812 [2024-12-09 23:03:30.575802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.812 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.812 "name": "raid_bdev1", 00:22:14.812 "aliases": [ 00:22:14.812 "650d7362-8be3-4ece-9f60-74e4f26d8889" 00:22:14.812 ], 00:22:14.812 "product_name": "Raid Volume", 00:22:14.812 "block_size": 512, 00:22:14.812 "num_blocks": 126976, 00:22:14.812 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:14.812 "assigned_rate_limits": { 00:22:14.812 "rw_ios_per_sec": 0, 00:22:14.812 "rw_mbytes_per_sec": 0, 00:22:14.812 "r_mbytes_per_sec": 0, 00:22:14.812 "w_mbytes_per_sec": 0 00:22:14.812 }, 00:22:14.812 "claimed": false, 
00:22:14.812 "zoned": false, 00:22:14.812 "supported_io_types": { 00:22:14.812 "read": true, 00:22:14.812 "write": true, 00:22:14.812 "unmap": false, 00:22:14.812 "flush": false, 00:22:14.812 "reset": true, 00:22:14.812 "nvme_admin": false, 00:22:14.812 "nvme_io": false, 00:22:14.812 "nvme_io_md": false, 00:22:14.812 "write_zeroes": true, 00:22:14.812 "zcopy": false, 00:22:14.812 "get_zone_info": false, 00:22:14.812 "zone_management": false, 00:22:14.812 "zone_append": false, 00:22:14.812 "compare": false, 00:22:14.812 "compare_and_write": false, 00:22:14.812 "abort": false, 00:22:14.812 "seek_hole": false, 00:22:14.812 "seek_data": false, 00:22:14.812 "copy": false, 00:22:14.813 "nvme_iov_md": false 00:22:14.813 }, 00:22:14.813 "driver_specific": { 00:22:14.813 "raid": { 00:22:14.813 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:14.813 "strip_size_kb": 64, 00:22:14.813 "state": "online", 00:22:14.813 "raid_level": "raid5f", 00:22:14.813 "superblock": true, 00:22:14.813 "num_base_bdevs": 3, 00:22:14.813 "num_base_bdevs_discovered": 3, 00:22:14.813 "num_base_bdevs_operational": 3, 00:22:14.813 "base_bdevs_list": [ 00:22:14.813 { 00:22:14.813 "name": "pt1", 00:22:14.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.813 "is_configured": true, 00:22:14.813 "data_offset": 2048, 00:22:14.813 "data_size": 63488 00:22:14.813 }, 00:22:14.813 { 00:22:14.813 "name": "pt2", 00:22:14.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.813 "is_configured": true, 00:22:14.813 "data_offset": 2048, 00:22:14.813 "data_size": 63488 00:22:14.813 }, 00:22:14.813 { 00:22:14.813 "name": "pt3", 00:22:14.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.813 "is_configured": true, 00:22:14.813 "data_offset": 2048, 00:22:14.813 "data_size": 63488 00:22:14.813 } 00:22:14.813 ] 00:22:14.813 } 00:22:14.813 } 00:22:14.813 }' 00:22:14.813 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:22:14.813 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:14.813 pt2 00:22:14.813 pt3' 00:22:14.813 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.072 23:03:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:15.072 [2024-12-09 23:03:30.831333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
650d7362-8be3-4ece-9f60-74e4f26d8889 '!=' 650d7362-8be3-4ece-9f60-74e4f26d8889 ']' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.072 [2024-12-09 23:03:30.879089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.072 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.331 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.331 "name": "raid_bdev1", 00:22:15.331 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:15.331 "strip_size_kb": 64, 00:22:15.331 "state": "online", 00:22:15.331 "raid_level": "raid5f", 00:22:15.331 "superblock": true, 00:22:15.331 "num_base_bdevs": 3, 00:22:15.331 "num_base_bdevs_discovered": 2, 00:22:15.331 "num_base_bdevs_operational": 2, 00:22:15.331 "base_bdevs_list": [ 00:22:15.331 { 00:22:15.331 "name": null, 00:22:15.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.331 "is_configured": false, 00:22:15.331 "data_offset": 0, 00:22:15.331 "data_size": 63488 00:22:15.331 }, 00:22:15.331 { 00:22:15.331 "name": "pt2", 00:22:15.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.331 "is_configured": true, 00:22:15.331 "data_offset": 2048, 00:22:15.331 "data_size": 63488 00:22:15.331 }, 00:22:15.331 { 00:22:15.331 "name": "pt3", 00:22:15.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.331 "is_configured": true, 00:22:15.331 "data_offset": 2048, 00:22:15.331 "data_size": 63488 00:22:15.331 } 00:22:15.331 ] 00:22:15.331 }' 00:22:15.331 23:03:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.332 23:03:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.590 
23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.590 [2024-12-09 23:03:31.366282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.590 [2024-12-09 23:03:31.366391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.590 [2024-12-09 23:03:31.366526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.590 [2024-12-09 23:03:31.366625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.590 [2024-12-09 23:03:31.366683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.590 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 [2024-12-09 23:03:31.450107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:22:15.850 [2024-12-09 23:03:31.450249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.850 [2024-12-09 23:03:31.450291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:15.850 [2024-12-09 23:03:31.450327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.850 [2024-12-09 23:03:31.452925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.850 [2024-12-09 23:03:31.453017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:15.850 [2024-12-09 23:03:31.453141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:15.850 [2024-12-09 23:03:31.453228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:15.850 pt2 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.850 "name": "raid_bdev1", 00:22:15.850 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:15.850 "strip_size_kb": 64, 00:22:15.850 "state": "configuring", 00:22:15.850 "raid_level": "raid5f", 00:22:15.850 "superblock": true, 00:22:15.850 "num_base_bdevs": 3, 00:22:15.850 "num_base_bdevs_discovered": 1, 00:22:15.850 "num_base_bdevs_operational": 2, 00:22:15.850 "base_bdevs_list": [ 00:22:15.850 { 00:22:15.850 "name": null, 00:22:15.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.850 "is_configured": false, 00:22:15.850 "data_offset": 2048, 00:22:15.850 "data_size": 63488 00:22:15.850 }, 00:22:15.850 { 00:22:15.850 "name": "pt2", 00:22:15.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.850 "is_configured": true, 00:22:15.850 "data_offset": 2048, 00:22:15.850 "data_size": 63488 00:22:15.850 }, 00:22:15.850 { 00:22:15.850 "name": null, 00:22:15.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.850 "is_configured": false, 00:22:15.850 "data_offset": 2048, 00:22:15.850 "data_size": 63488 00:22:15.850 } 00:22:15.850 ] 00:22:15.850 }' 00:22:15.850 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.850 23:03:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.110 [2024-12-09 23:03:31.941295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:16.110 [2024-12-09 23:03:31.941447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.110 [2024-12-09 23:03:31.941493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:16.110 [2024-12-09 23:03:31.941523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.110 [2024-12-09 23:03:31.942058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.110 [2024-12-09 23:03:31.942087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:16.110 [2024-12-09 23:03:31.942177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:16.110 [2024-12-09 23:03:31.942209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:16.110 [2024-12-09 23:03:31.942341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:16.110 [2024-12-09 23:03:31.942361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:16.110 [2024-12-09 
23:03:31.942646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:16.110 [2024-12-09 23:03:31.948591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:16.110 [2024-12-09 23:03:31.948614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:16.110 [2024-12-09 23:03:31.948950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.110 pt3 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.110 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.369 23:03:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.370 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.370 "name": "raid_bdev1", 00:22:16.370 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:16.370 "strip_size_kb": 64, 00:22:16.370 "state": "online", 00:22:16.370 "raid_level": "raid5f", 00:22:16.370 "superblock": true, 00:22:16.370 "num_base_bdevs": 3, 00:22:16.370 "num_base_bdevs_discovered": 2, 00:22:16.370 "num_base_bdevs_operational": 2, 00:22:16.370 "base_bdevs_list": [ 00:22:16.370 { 00:22:16.370 "name": null, 00:22:16.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.370 "is_configured": false, 00:22:16.370 "data_offset": 2048, 00:22:16.370 "data_size": 63488 00:22:16.370 }, 00:22:16.370 { 00:22:16.370 "name": "pt2", 00:22:16.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.370 "is_configured": true, 00:22:16.370 "data_offset": 2048, 00:22:16.370 "data_size": 63488 00:22:16.370 }, 00:22:16.370 { 00:22:16.370 "name": "pt3", 00:22:16.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:16.370 "is_configured": true, 00:22:16.370 "data_offset": 2048, 00:22:16.370 "data_size": 63488 00:22:16.370 } 00:22:16.370 ] 00:22:16.370 }' 00:22:16.370 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.370 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.629 [2024-12-09 23:03:32.468073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.629 [2024-12-09 23:03:32.468200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.629 [2024-12-09 23:03:32.468330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.629 [2024-12-09 23:03:32.468453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.629 [2024-12-09 23:03:32.468553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:16.629 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.890 23:03:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.890 [2024-12-09 23:03:32.544000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.890 [2024-12-09 23:03:32.544189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.890 [2024-12-09 23:03:32.544221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:16.890 [2024-12-09 23:03:32.544232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.890 [2024-12-09 23:03:32.546985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.890 [2024-12-09 23:03:32.547035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.890 [2024-12-09 23:03:32.547137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:16.890 [2024-12-09 23:03:32.547198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:16.890 [2024-12-09 23:03:32.547379] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:16.890 [2024-12-09 23:03:32.547392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.890 [2024-12-09 23:03:32.547411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:16.890 
[2024-12-09 23:03:32.547484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:16.890 pt1 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.890 "name": "raid_bdev1", 00:22:16.890 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:16.890 "strip_size_kb": 64, 00:22:16.890 "state": "configuring", 00:22:16.890 "raid_level": "raid5f", 00:22:16.890 "superblock": true, 00:22:16.890 "num_base_bdevs": 3, 00:22:16.890 "num_base_bdevs_discovered": 1, 00:22:16.890 "num_base_bdevs_operational": 2, 00:22:16.890 "base_bdevs_list": [ 00:22:16.890 { 00:22:16.890 "name": null, 00:22:16.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.890 "is_configured": false, 00:22:16.890 "data_offset": 2048, 00:22:16.890 "data_size": 63488 00:22:16.890 }, 00:22:16.890 { 00:22:16.890 "name": "pt2", 00:22:16.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.890 "is_configured": true, 00:22:16.890 "data_offset": 2048, 00:22:16.890 "data_size": 63488 00:22:16.890 }, 00:22:16.890 { 00:22:16.890 "name": null, 00:22:16.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:16.890 "is_configured": false, 00:22:16.890 "data_offset": 2048, 00:22:16.890 "data_size": 63488 00:22:16.890 } 00:22:16.890 ] 00:22:16.890 }' 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.890 23:03:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.152 23:03:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:17.152 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:17.152 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.152 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.412 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:22:17.412 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:17.412 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:17.412 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.412 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.412 [2024-12-09 23:03:33.055204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:17.412 [2024-12-09 23:03:33.055356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.412 [2024-12-09 23:03:33.055416] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:17.412 [2024-12-09 23:03:33.055455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.412 [2024-12-09 23:03:33.056071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.412 [2024-12-09 23:03:33.056143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.412 [2024-12-09 23:03:33.056279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:17.412 [2024-12-09 23:03:33.056341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:17.412 [2024-12-09 23:03:33.056573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:17.413 [2024-12-09 23:03:33.056624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:17.413 [2024-12-09 23:03:33.056975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:17.413 [2024-12-09 23:03:33.065051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:17.413 [2024-12-09 
23:03:33.065139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:17.413 [2024-12-09 23:03:33.065528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.413 pt3 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.413 23:03:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.413 "name": "raid_bdev1", 00:22:17.413 "uuid": "650d7362-8be3-4ece-9f60-74e4f26d8889", 00:22:17.413 "strip_size_kb": 64, 00:22:17.413 "state": "online", 00:22:17.413 "raid_level": "raid5f", 00:22:17.413 "superblock": true, 00:22:17.413 "num_base_bdevs": 3, 00:22:17.413 "num_base_bdevs_discovered": 2, 00:22:17.413 "num_base_bdevs_operational": 2, 00:22:17.413 "base_bdevs_list": [ 00:22:17.413 { 00:22:17.413 "name": null, 00:22:17.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.413 "is_configured": false, 00:22:17.413 "data_offset": 2048, 00:22:17.413 "data_size": 63488 00:22:17.413 }, 00:22:17.413 { 00:22:17.413 "name": "pt2", 00:22:17.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.413 "is_configured": true, 00:22:17.413 "data_offset": 2048, 00:22:17.413 "data_size": 63488 00:22:17.413 }, 00:22:17.413 { 00:22:17.413 "name": "pt3", 00:22:17.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.413 "is_configured": true, 00:22:17.413 "data_offset": 2048, 00:22:17.413 "data_size": 63488 00:22:17.413 } 00:22:17.413 ] 00:22:17.413 }' 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.413 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.980 [2024-12-09 23:03:33.605714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 650d7362-8be3-4ece-9f60-74e4f26d8889 '!=' 650d7362-8be3-4ece-9f60-74e4f26d8889 ']' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81808 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81808 ']' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81808 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81808 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.980 killing process with pid 81808 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81808' 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81808 00:22:17.980 [2024-12-09 23:03:33.692618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.980 [2024-12-09 23:03:33.692738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.980 [2024-12-09 23:03:33.692813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.980 23:03:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81808 00:22:17.980 [2024-12-09 23:03:33.692828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:18.240 [2024-12-09 23:03:34.030651] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.627 23:03:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:19.627 00:22:19.627 real 0m8.277s 00:22:19.627 user 0m12.895s 00:22:19.627 sys 0m1.530s 00:22:19.627 23:03:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.628 ************************************ 00:22:19.628 END TEST raid5f_superblock_test 00:22:19.628 ************************************ 00:22:19.628 23:03:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.628 23:03:35 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:22:19.628 23:03:35 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:22:19.628 23:03:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:19.628 23:03:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.628 23:03:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.628 ************************************ 00:22:19.628 START TEST 
raid5f_rebuild_test 00:22:19.628 ************************************ 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:19.628 23:03:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82256 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82256 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82256 ']' 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:22:19.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.628 23:03:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.628 [2024-12-09 23:03:35.385771] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:22:19.628 [2024-12-09 23:03:35.385992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.628 Zero copy mechanism will not be used. 00:22:19.628 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82256 ] 00:22:19.888 [2024-12-09 23:03:35.564724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.888 [2024-12-09 23:03:35.687288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.147 [2024-12-09 23:03:35.925549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.147 [2024-12-09 23:03:35.925689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.407 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.407 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:22:20.407 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.407 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:20.407 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.407 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
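[editor's note] The test driven above is shell: `verify_raid_bdev_state` (bdev_bdev_raid.sh, lines @103 onward in the xtrace) calls `rpc_cmd bdev_raid_get_bdevs all`, filters the result with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares the fields against the expected state. That cannot be reproduced without a live SPDK target, so the following is only a minimal standalone sketch in Python of the same comparison, with field values copied from the `raid_bdev_info` JSON dumped later in this log; the field names come from the dump, everything else is illustrative.

```python
import json

# Shape of one entry returned by "rpc.py bdev_raid_get_bdevs all", trimmed to
# the fields that verify_raid_bdev_state actually inspects (values taken from
# the raid_bdev_info dump in this log).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_base_bdevs_operational):
    # Mirrors the shell helper's comparisons: any mismatch fails the test.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_base_bdevs_operational

# Same arguments as the "verify_raid_bdev_state raid_bdev1 online raid5f 64 3"
# call in the xtrace above.
verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3)
print("raid_bdev1 state OK")
```

After a base bdev is removed (the `bdev_raid_remove_base_bdev BaseBdev1` step later in the log), the same check is re-run with `num_base_bdevs_operational=2`, which is why the helper takes the expected count as a parameter.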
00:22:20.666 BaseBdev1_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 [2024-12-09 23:03:36.302037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.666 [2024-12-09 23:03:36.302120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.666 [2024-12-09 23:03:36.302152] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:20.666 [2024-12-09 23:03:36.302167] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.666 [2024-12-09 23:03:36.304747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.666 [2024-12-09 23:03:36.304797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.666 BaseBdev1 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 BaseBdev2_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 [2024-12-09 23:03:36.363646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:20.666 [2024-12-09 23:03:36.363717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.666 [2024-12-09 23:03:36.363738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:20.666 [2024-12-09 23:03:36.363749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.666 [2024-12-09 23:03:36.366107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.666 [2024-12-09 23:03:36.366218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.666 BaseBdev2 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 BaseBdev3_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 
23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 [2024-12-09 23:03:36.437506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:20.666 [2024-12-09 23:03:36.437570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.666 [2024-12-09 23:03:36.437594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:20.666 [2024-12-09 23:03:36.437605] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.666 [2024-12-09 23:03:36.439783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.666 [2024-12-09 23:03:36.439824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:20.666 BaseBdev3 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 spare_malloc 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 spare_delay 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 [2024-12-09 23:03:36.503244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.666 [2024-12-09 23:03:36.503324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.666 [2024-12-09 23:03:36.503351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:20.666 [2024-12-09 23:03:36.503363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.666 [2024-12-09 23:03:36.505951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.666 [2024-12-09 23:03:36.506002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.666 spare 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.666 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.666 [2024-12-09 23:03:36.515295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.667 [2024-12-09 23:03:36.517519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.667 [2024-12-09 23:03:36.517591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:20.667 [2024-12-09 23:03:36.517691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:20.667 
[2024-12-09 23:03:36.517704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:20.667 [2024-12-09 23:03:36.517998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:20.926 [2024-12-09 23:03:36.525094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:20.926 [2024-12-09 23:03:36.525164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:20.926 [2024-12-09 23:03:36.525483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.926 23:03:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.926 "name": "raid_bdev1", 00:22:20.926 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:20.926 "strip_size_kb": 64, 00:22:20.926 "state": "online", 00:22:20.926 "raid_level": "raid5f", 00:22:20.926 "superblock": false, 00:22:20.926 "num_base_bdevs": 3, 00:22:20.926 "num_base_bdevs_discovered": 3, 00:22:20.926 "num_base_bdevs_operational": 3, 00:22:20.926 "base_bdevs_list": [ 00:22:20.926 { 00:22:20.926 "name": "BaseBdev1", 00:22:20.926 "uuid": "421c3bdb-30a6-594f-8917-b48435dc1b34", 00:22:20.926 "is_configured": true, 00:22:20.926 "data_offset": 0, 00:22:20.926 "data_size": 65536 00:22:20.926 }, 00:22:20.926 { 00:22:20.926 "name": "BaseBdev2", 00:22:20.926 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:20.926 "is_configured": true, 00:22:20.926 "data_offset": 0, 00:22:20.926 "data_size": 65536 00:22:20.926 }, 00:22:20.926 { 00:22:20.926 "name": "BaseBdev3", 00:22:20.926 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:20.926 "is_configured": true, 00:22:20.926 "data_offset": 0, 00:22:20.926 "data_size": 65536 00:22:20.926 } 00:22:20.926 ] 00:22:20.926 }' 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.926 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.185 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:21.185 23:03:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
jq -r '.[].num_blocks' 00:22:21.185 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.185 23:03:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.185 [2024-12-09 23:03:36.988537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.185 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.185 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:22:21.185 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.185 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.185 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.185 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:21.444 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:21.445 23:03:37 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.445 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:21.704 [2024-12-09 23:03:37.315774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:21.704 /dev/nbd0 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:22:21.704 1+0 records in 00:22:21.704 1+0 records out 00:22:21.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638088 s, 6.4 MB/s 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.704 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:21.705 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:22.274 512+0 records in 00:22:22.274 512+0 records out 00:22:22.274 67108864 bytes (67 MB, 64 MiB) copied, 0.435966 s, 154 MB/s 00:22:22.274 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:22.274 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:22.274 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:22.274 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:22.274 23:03:37 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:22.274 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:22.274 23:03:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:22.274 [2024-12-09 23:03:38.080719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.274 [2024-12-09 23:03:38.093101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.274 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.533 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.533 "name": "raid_bdev1", 00:22:22.533 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:22.533 "strip_size_kb": 64, 00:22:22.533 "state": "online", 00:22:22.533 "raid_level": "raid5f", 00:22:22.533 "superblock": false, 00:22:22.533 "num_base_bdevs": 3, 00:22:22.533 "num_base_bdevs_discovered": 2, 00:22:22.533 "num_base_bdevs_operational": 2, 00:22:22.533 "base_bdevs_list": [ 00:22:22.533 { 00:22:22.533 "name": null, 00:22:22.533 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:22.533 "is_configured": false, 00:22:22.533 "data_offset": 0, 00:22:22.533 "data_size": 65536 00:22:22.533 }, 00:22:22.533 { 00:22:22.533 "name": "BaseBdev2", 00:22:22.533 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:22.533 "is_configured": true, 00:22:22.533 "data_offset": 0, 00:22:22.533 "data_size": 65536 00:22:22.533 }, 00:22:22.533 { 00:22:22.533 "name": "BaseBdev3", 00:22:22.533 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:22.533 "is_configured": true, 00:22:22.533 "data_offset": 0, 00:22:22.533 "data_size": 65536 00:22:22.533 } 00:22:22.533 ] 00:22:22.533 }' 00:22:22.533 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.533 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:22.793 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 [2024-12-09 23:03:38.544455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.793 [2024-12-09 23:03:38.567670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:22:22.793 23:03:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 23:03:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:22.793 [2024-12-09 23:03:38.578330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.733 
23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.733 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.992 "name": "raid_bdev1", 00:22:23.992 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:23.992 "strip_size_kb": 64, 00:22:23.992 "state": "online", 00:22:23.992 "raid_level": "raid5f", 00:22:23.992 "superblock": false, 00:22:23.992 "num_base_bdevs": 3, 00:22:23.992 "num_base_bdevs_discovered": 3, 00:22:23.992 "num_base_bdevs_operational": 3, 00:22:23.992 "process": { 00:22:23.992 "type": "rebuild", 00:22:23.992 "target": "spare", 00:22:23.992 "progress": { 00:22:23.992 "blocks": 18432, 00:22:23.992 "percent": 14 00:22:23.992 } 00:22:23.992 }, 00:22:23.992 "base_bdevs_list": [ 00:22:23.992 { 00:22:23.992 "name": "spare", 00:22:23.992 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:23.992 "is_configured": true, 00:22:23.992 "data_offset": 0, 00:22:23.992 "data_size": 65536 00:22:23.992 }, 00:22:23.992 { 00:22:23.992 "name": "BaseBdev2", 00:22:23.992 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:23.992 "is_configured": true, 00:22:23.992 "data_offset": 0, 00:22:23.992 "data_size": 65536 00:22:23.992 }, 00:22:23.992 
{ 00:22:23.992 "name": "BaseBdev3", 00:22:23.992 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:23.992 "is_configured": true, 00:22:23.992 "data_offset": 0, 00:22:23.992 "data_size": 65536 00:22:23.992 } 00:22:23.992 ] 00:22:23.992 }' 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 [2024-12-09 23:03:39.730539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.992 [2024-12-09 23:03:39.790724] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:23.992 [2024-12-09 23:03:39.790933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.992 [2024-12-09 23:03:39.790976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.992 [2024-12-09 23:03:39.790988] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.255 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.255 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.255 "name": "raid_bdev1", 00:22:24.255 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:24.255 "strip_size_kb": 64, 00:22:24.255 "state": "online", 00:22:24.255 "raid_level": "raid5f", 00:22:24.255 "superblock": false, 00:22:24.255 "num_base_bdevs": 3, 00:22:24.255 "num_base_bdevs_discovered": 2, 00:22:24.255 "num_base_bdevs_operational": 2, 00:22:24.255 "base_bdevs_list": [ 00:22:24.255 { 00:22:24.255 "name": null, 00:22:24.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.255 
"is_configured": false, 00:22:24.255 "data_offset": 0, 00:22:24.255 "data_size": 65536 00:22:24.255 }, 00:22:24.255 { 00:22:24.255 "name": "BaseBdev2", 00:22:24.255 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:24.255 "is_configured": true, 00:22:24.255 "data_offset": 0, 00:22:24.255 "data_size": 65536 00:22:24.255 }, 00:22:24.255 { 00:22:24.255 "name": "BaseBdev3", 00:22:24.255 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:24.255 "is_configured": true, 00:22:24.255 "data_offset": 0, 00:22:24.255 "data_size": 65536 00:22:24.255 } 00:22:24.255 ] 00:22:24.255 }' 00:22:24.255 23:03:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.255 23:03:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.530 "name": 
"raid_bdev1", 00:22:24.530 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:24.530 "strip_size_kb": 64, 00:22:24.530 "state": "online", 00:22:24.530 "raid_level": "raid5f", 00:22:24.530 "superblock": false, 00:22:24.530 "num_base_bdevs": 3, 00:22:24.530 "num_base_bdevs_discovered": 2, 00:22:24.530 "num_base_bdevs_operational": 2, 00:22:24.530 "base_bdevs_list": [ 00:22:24.530 { 00:22:24.530 "name": null, 00:22:24.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.530 "is_configured": false, 00:22:24.530 "data_offset": 0, 00:22:24.530 "data_size": 65536 00:22:24.530 }, 00:22:24.530 { 00:22:24.530 "name": "BaseBdev2", 00:22:24.530 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:24.530 "is_configured": true, 00:22:24.530 "data_offset": 0, 00:22:24.530 "data_size": 65536 00:22:24.530 }, 00:22:24.530 { 00:22:24.530 "name": "BaseBdev3", 00:22:24.530 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:24.530 "is_configured": true, 00:22:24.530 "data_offset": 0, 00:22:24.530 "data_size": 65536 00:22:24.530 } 00:22:24.530 ] 00:22:24.530 }' 00:22:24.530 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.791 [2024-12-09 23:03:40.449981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.791 [2024-12-09 
23:03:40.468969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.791 23:03:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:24.791 [2024-12-09 23:03:40.478006] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.726 "name": "raid_bdev1", 00:22:25.726 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:25.726 "strip_size_kb": 64, 00:22:25.726 "state": "online", 00:22:25.726 "raid_level": "raid5f", 00:22:25.726 "superblock": false, 00:22:25.726 "num_base_bdevs": 3, 00:22:25.726 "num_base_bdevs_discovered": 3, 00:22:25.726 "num_base_bdevs_operational": 3, 
00:22:25.726 "process": { 00:22:25.726 "type": "rebuild", 00:22:25.726 "target": "spare", 00:22:25.726 "progress": { 00:22:25.726 "blocks": 20480, 00:22:25.726 "percent": 15 00:22:25.726 } 00:22:25.726 }, 00:22:25.726 "base_bdevs_list": [ 00:22:25.726 { 00:22:25.726 "name": "spare", 00:22:25.726 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:25.726 "is_configured": true, 00:22:25.726 "data_offset": 0, 00:22:25.726 "data_size": 65536 00:22:25.726 }, 00:22:25.726 { 00:22:25.726 "name": "BaseBdev2", 00:22:25.726 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:25.726 "is_configured": true, 00:22:25.726 "data_offset": 0, 00:22:25.726 "data_size": 65536 00:22:25.726 }, 00:22:25.726 { 00:22:25.726 "name": "BaseBdev3", 00:22:25.726 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:25.726 "is_configured": true, 00:22:25.726 "data_offset": 0, 00:22:25.726 "data_size": 65536 00:22:25.726 } 00:22:25.726 ] 00:22:25.726 }' 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.726 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=579 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:25.985 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.986 "name": "raid_bdev1", 00:22:25.986 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:25.986 "strip_size_kb": 64, 00:22:25.986 "state": "online", 00:22:25.986 "raid_level": "raid5f", 00:22:25.986 "superblock": false, 00:22:25.986 "num_base_bdevs": 3, 00:22:25.986 "num_base_bdevs_discovered": 3, 00:22:25.986 "num_base_bdevs_operational": 3, 00:22:25.986 "process": { 00:22:25.986 "type": "rebuild", 00:22:25.986 "target": "spare", 00:22:25.986 "progress": { 00:22:25.986 "blocks": 22528, 00:22:25.986 "percent": 17 00:22:25.986 } 00:22:25.986 }, 00:22:25.986 "base_bdevs_list": [ 00:22:25.986 { 00:22:25.986 "name": "spare", 00:22:25.986 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:25.986 "is_configured": true, 00:22:25.986 "data_offset": 0, 00:22:25.986 "data_size": 65536 00:22:25.986 }, 00:22:25.986 { 00:22:25.986 "name": "BaseBdev2", 
00:22:25.986 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:25.986 "is_configured": true, 00:22:25.986 "data_offset": 0, 00:22:25.986 "data_size": 65536 00:22:25.986 }, 00:22:25.986 { 00:22:25.986 "name": "BaseBdev3", 00:22:25.986 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:25.986 "is_configured": true, 00:22:25.986 "data_offset": 0, 00:22:25.986 "data_size": 65536 00:22:25.986 } 00:22:25.986 ] 00:22:25.986 }' 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.986 23:03:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.923 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.923 
23:03:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.182 "name": "raid_bdev1", 00:22:27.182 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:27.182 "strip_size_kb": 64, 00:22:27.182 "state": "online", 00:22:27.182 "raid_level": "raid5f", 00:22:27.182 "superblock": false, 00:22:27.182 "num_base_bdevs": 3, 00:22:27.182 "num_base_bdevs_discovered": 3, 00:22:27.182 "num_base_bdevs_operational": 3, 00:22:27.182 "process": { 00:22:27.182 "type": "rebuild", 00:22:27.182 "target": "spare", 00:22:27.182 "progress": { 00:22:27.182 "blocks": 45056, 00:22:27.182 "percent": 34 00:22:27.182 } 00:22:27.182 }, 00:22:27.182 "base_bdevs_list": [ 00:22:27.182 { 00:22:27.182 "name": "spare", 00:22:27.182 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:27.182 "is_configured": true, 00:22:27.182 "data_offset": 0, 00:22:27.182 "data_size": 65536 00:22:27.182 }, 00:22:27.182 { 00:22:27.182 "name": "BaseBdev2", 00:22:27.182 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:27.182 "is_configured": true, 00:22:27.182 "data_offset": 0, 00:22:27.182 "data_size": 65536 00:22:27.182 }, 00:22:27.182 { 00:22:27.182 "name": "BaseBdev3", 00:22:27.182 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:27.182 "is_configured": true, 00:22:27.182 "data_offset": 0, 00:22:27.182 "data_size": 65536 00:22:27.182 } 00:22:27.182 ] 00:22:27.182 }' 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.182 23:03:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.119 "name": "raid_bdev1", 00:22:28.119 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:28.119 "strip_size_kb": 64, 00:22:28.119 "state": "online", 00:22:28.119 "raid_level": "raid5f", 00:22:28.119 "superblock": false, 00:22:28.119 "num_base_bdevs": 3, 00:22:28.119 "num_base_bdevs_discovered": 3, 00:22:28.119 "num_base_bdevs_operational": 3, 00:22:28.119 "process": { 00:22:28.119 "type": "rebuild", 00:22:28.119 "target": "spare", 00:22:28.119 "progress": { 00:22:28.119 "blocks": 69632, 00:22:28.119 "percent": 53 00:22:28.119 } 
00:22:28.119 }, 00:22:28.119 "base_bdevs_list": [ 00:22:28.119 { 00:22:28.119 "name": "spare", 00:22:28.119 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:28.119 "is_configured": true, 00:22:28.119 "data_offset": 0, 00:22:28.119 "data_size": 65536 00:22:28.119 }, 00:22:28.119 { 00:22:28.119 "name": "BaseBdev2", 00:22:28.119 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:28.119 "is_configured": true, 00:22:28.119 "data_offset": 0, 00:22:28.119 "data_size": 65536 00:22:28.119 }, 00:22:28.119 { 00:22:28.119 "name": "BaseBdev3", 00:22:28.119 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:28.119 "is_configured": true, 00:22:28.119 "data_offset": 0, 00:22:28.119 "data_size": 65536 00:22:28.119 } 00:22:28.119 ] 00:22:28.119 }' 00:22:28.119 23:03:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.377 23:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.377 23:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.377 23:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.377 23:03:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.318 23:03:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.318 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.318 "name": "raid_bdev1", 00:22:29.318 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:29.318 "strip_size_kb": 64, 00:22:29.318 "state": "online", 00:22:29.318 "raid_level": "raid5f", 00:22:29.318 "superblock": false, 00:22:29.319 "num_base_bdevs": 3, 00:22:29.319 "num_base_bdevs_discovered": 3, 00:22:29.319 "num_base_bdevs_operational": 3, 00:22:29.319 "process": { 00:22:29.319 "type": "rebuild", 00:22:29.319 "target": "spare", 00:22:29.319 "progress": { 00:22:29.319 "blocks": 92160, 00:22:29.319 "percent": 70 00:22:29.319 } 00:22:29.319 }, 00:22:29.319 "base_bdevs_list": [ 00:22:29.319 { 00:22:29.319 "name": "spare", 00:22:29.319 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:29.319 "is_configured": true, 00:22:29.319 "data_offset": 0, 00:22:29.319 "data_size": 65536 00:22:29.319 }, 00:22:29.319 { 00:22:29.319 "name": "BaseBdev2", 00:22:29.319 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:29.319 "is_configured": true, 00:22:29.319 "data_offset": 0, 00:22:29.319 "data_size": 65536 00:22:29.319 }, 00:22:29.319 { 00:22:29.319 "name": "BaseBdev3", 00:22:29.319 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:29.319 "is_configured": true, 00:22:29.319 "data_offset": 0, 00:22:29.319 "data_size": 65536 00:22:29.319 } 00:22:29.319 ] 00:22:29.319 }' 00:22:29.319 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:22:29.319 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.319 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.579 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.579 23:03:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.515 "name": "raid_bdev1", 00:22:30.515 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:30.515 "strip_size_kb": 64, 00:22:30.515 "state": "online", 00:22:30.515 "raid_level": "raid5f", 00:22:30.515 "superblock": 
false, 00:22:30.515 "num_base_bdevs": 3, 00:22:30.515 "num_base_bdevs_discovered": 3, 00:22:30.515 "num_base_bdevs_operational": 3, 00:22:30.515 "process": { 00:22:30.515 "type": "rebuild", 00:22:30.515 "target": "spare", 00:22:30.515 "progress": { 00:22:30.515 "blocks": 114688, 00:22:30.515 "percent": 87 00:22:30.515 } 00:22:30.515 }, 00:22:30.515 "base_bdevs_list": [ 00:22:30.515 { 00:22:30.515 "name": "spare", 00:22:30.515 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:30.515 "is_configured": true, 00:22:30.515 "data_offset": 0, 00:22:30.515 "data_size": 65536 00:22:30.515 }, 00:22:30.515 { 00:22:30.515 "name": "BaseBdev2", 00:22:30.515 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:30.515 "is_configured": true, 00:22:30.515 "data_offset": 0, 00:22:30.515 "data_size": 65536 00:22:30.515 }, 00:22:30.515 { 00:22:30.515 "name": "BaseBdev3", 00:22:30.515 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:30.515 "is_configured": true, 00:22:30.515 "data_offset": 0, 00:22:30.515 "data_size": 65536 00:22:30.515 } 00:22:30.515 ] 00:22:30.515 }' 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.515 23:03:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:31.456 [2024-12-09 23:03:46.944388] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:31.456 [2024-12-09 23:03:46.944674] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:31.456 [2024-12-09 23:03:46.944738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.716 "name": "raid_bdev1", 00:22:31.716 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:31.716 "strip_size_kb": 64, 00:22:31.716 "state": "online", 00:22:31.716 "raid_level": "raid5f", 00:22:31.716 "superblock": false, 00:22:31.716 "num_base_bdevs": 3, 00:22:31.716 "num_base_bdevs_discovered": 3, 00:22:31.716 "num_base_bdevs_operational": 3, 00:22:31.716 "base_bdevs_list": [ 00:22:31.716 { 00:22:31.716 "name": "spare", 00:22:31.716 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:31.716 "is_configured": true, 00:22:31.716 "data_offset": 0, 00:22:31.716 "data_size": 65536 00:22:31.716 }, 00:22:31.716 { 00:22:31.716 "name": "BaseBdev2", 00:22:31.716 "uuid": 
"4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:31.716 "is_configured": true, 00:22:31.716 "data_offset": 0, 00:22:31.716 "data_size": 65536 00:22:31.716 }, 00:22:31.716 { 00:22:31.716 "name": "BaseBdev3", 00:22:31.716 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:31.716 "is_configured": true, 00:22:31.716 "data_offset": 0, 00:22:31.716 "data_size": 65536 00:22:31.716 } 00:22:31.716 ] 00:22:31.716 }' 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.716 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.716 "name": "raid_bdev1", 00:22:31.716 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:31.716 "strip_size_kb": 64, 00:22:31.716 "state": "online", 00:22:31.716 "raid_level": "raid5f", 00:22:31.716 "superblock": false, 00:22:31.716 "num_base_bdevs": 3, 00:22:31.716 "num_base_bdevs_discovered": 3, 00:22:31.716 "num_base_bdevs_operational": 3, 00:22:31.716 "base_bdevs_list": [ 00:22:31.716 { 00:22:31.716 "name": "spare", 00:22:31.716 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:31.716 "is_configured": true, 00:22:31.716 "data_offset": 0, 00:22:31.716 "data_size": 65536 00:22:31.716 }, 00:22:31.716 { 00:22:31.716 "name": "BaseBdev2", 00:22:31.716 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:31.716 "is_configured": true, 00:22:31.716 "data_offset": 0, 00:22:31.716 "data_size": 65536 00:22:31.716 }, 00:22:31.716 { 00:22:31.716 "name": "BaseBdev3", 00:22:31.716 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:31.716 "is_configured": true, 00:22:31.716 "data_offset": 0, 00:22:31.716 "data_size": 65536 00:22:31.716 } 00:22:31.716 ] 00:22:31.716 }' 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.978 "name": "raid_bdev1", 00:22:31.978 "uuid": "039d2553-611b-44fb-9885-e2d0955c1e53", 00:22:31.978 "strip_size_kb": 64, 00:22:31.978 "state": "online", 00:22:31.978 "raid_level": "raid5f", 00:22:31.978 "superblock": false, 00:22:31.978 "num_base_bdevs": 3, 00:22:31.978 "num_base_bdevs_discovered": 3, 00:22:31.978 "num_base_bdevs_operational": 3, 00:22:31.978 "base_bdevs_list": [ 00:22:31.978 { 00:22:31.978 "name": "spare", 00:22:31.978 "uuid": "532fa0aa-581f-5001-aef4-1d177105ff24", 00:22:31.978 "is_configured": true, 00:22:31.978 "data_offset": 
0, 00:22:31.978 "data_size": 65536 00:22:31.978 }, 00:22:31.978 { 00:22:31.978 "name": "BaseBdev2", 00:22:31.978 "uuid": "4e516fb6-0b5f-54c0-965c-44058fa5f165", 00:22:31.978 "is_configured": true, 00:22:31.978 "data_offset": 0, 00:22:31.978 "data_size": 65536 00:22:31.978 }, 00:22:31.978 { 00:22:31.978 "name": "BaseBdev3", 00:22:31.978 "uuid": "200481fb-151c-57bc-8100-e60f6c37e8a6", 00:22:31.978 "is_configured": true, 00:22:31.978 "data_offset": 0, 00:22:31.978 "data_size": 65536 00:22:31.978 } 00:22:31.978 ] 00:22:31.978 }' 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.978 23:03:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 [2024-12-09 23:03:48.112620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.546 [2024-12-09 23:03:48.112670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.546 [2024-12-09 23:03:48.112782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.546 [2024-12-09 23:03:48.112883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:32.546 [2024-12-09 23:03:48.112903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.546 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:32.812 /dev/nbd0 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:32.812 1+0 records in 00:22:32.812 1+0 records out 00:22:32.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269227 s, 15.2 MB/s 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:32.812 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.812 
23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:33.071 /dev/nbd1 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:33.071 1+0 records in 00:22:33.071 1+0 records out 00:22:33.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540643 s, 7.6 MB/s 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:33.071 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:33.330 23:03:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:33.330 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:33.330 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:33.330 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:33.330 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:33.330 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:33.330 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:33.590 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:33.849 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82256 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82256 ']' 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82256 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82256 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.850 killing process with pid 82256 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82256' 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82256 00:22:33.850 Received shutdown signal, test time was about 60.000000 seconds 00:22:33.850 00:22:33.850 Latency(us) 00:22:33.850 [2024-12-09T23:03:49.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.850 [2024-12-09T23:03:49.706Z] =================================================================================================================== 00:22:33.850 [2024-12-09T23:03:49.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.850 23:03:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82256 00:22:33.850 [2024-12-09 23:03:49.572784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:34.447 [2024-12-09 23:03:50.055167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:35.841 00:22:35.841 real 0m16.158s 00:22:35.841 user 0m19.792s 00:22:35.841 sys 0m2.282s 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.841 ************************************ 00:22:35.841 END TEST raid5f_rebuild_test 00:22:35.841 ************************************ 00:22:35.841 23:03:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:22:35.841 23:03:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:35.841 23:03:51 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.841 23:03:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.841 ************************************ 00:22:35.841 START TEST raid5f_rebuild_test_sb 00:22:35.841 ************************************ 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:35.841 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:35.842 23:03:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82696 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82696 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82696 
']' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.842 23:03:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:35.842 Zero copy mechanism will not be used. 00:22:35.842 [2024-12-09 23:03:51.605217] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:22:35.842 [2024-12-09 23:03:51.605357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82696 ] 00:22:36.100 [2024-12-09 23:03:51.784409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.100 [2024-12-09 23:03:51.941729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.668 [2024-12-09 23:03:52.219313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.668 [2024-12-09 23:03:52.219406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:36.926 23:03:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.926 BaseBdev1_malloc 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.926 [2024-12-09 23:03:52.657529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:36.926 [2024-12-09 23:03:52.657613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.926 [2024-12-09 23:03:52.657642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:36.926 [2024-12-09 23:03:52.657656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.926 [2024-12-09 23:03:52.660109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.926 [2024-12-09 23:03:52.660161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:36.926 BaseBdev1 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.926 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.927 BaseBdev2_malloc 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.927 [2024-12-09 23:03:52.714470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:36.927 [2024-12-09 23:03:52.714564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.927 [2024-12-09 23:03:52.714590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:36.927 [2024-12-09 23:03:52.714606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.927 [2024-12-09 23:03:52.717091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.927 [2024-12-09 23:03:52.717142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:36.927 BaseBdev2 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.927 
23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.927 BaseBdev3_malloc 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.927 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.186 [2024-12-09 23:03:52.788312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:37.186 [2024-12-09 23:03:52.788390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.186 [2024-12-09 23:03:52.788419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:37.186 [2024-12-09 23:03:52.788432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.186 [2024-12-09 23:03:52.790809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.186 [2024-12-09 23:03:52.790854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:37.186 BaseBdev3 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.186 spare_malloc 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.186 spare_delay 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.186 [2024-12-09 23:03:52.860710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:37.186 [2024-12-09 23:03:52.860809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.186 [2024-12-09 23:03:52.860839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:37.186 [2024-12-09 23:03:52.860853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.186 [2024-12-09 23:03:52.863408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.186 [2024-12-09 23:03:52.863454] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:37.186 spare 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.186 [2024-12-09 23:03:52.872766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.186 [2024-12-09 23:03:52.874905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:37.186 [2024-12-09 23:03:52.874994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:37.186 [2024-12-09 23:03:52.875251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:37.186 [2024-12-09 23:03:52.875276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:37.186 [2024-12-09 23:03:52.875604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:37.186 [2024-12-09 23:03:52.882254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:37.186 [2024-12-09 23:03:52.882295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:37.186 [2024-12-09 23:03:52.882574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.186 "name": "raid_bdev1", 00:22:37.186 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:37.186 "strip_size_kb": 64, 00:22:37.186 "state": "online", 00:22:37.186 "raid_level": "raid5f", 00:22:37.186 "superblock": true, 00:22:37.186 "num_base_bdevs": 3, 00:22:37.186 "num_base_bdevs_discovered": 3, 00:22:37.186 "num_base_bdevs_operational": 3, 00:22:37.186 "base_bdevs_list": [ 00:22:37.186 { 00:22:37.186 "name": "BaseBdev1", 00:22:37.186 "uuid": "ed02423d-503b-54cb-b5fa-56f5b1594a5a", 00:22:37.186 "is_configured": true, 00:22:37.186 "data_offset": 2048, 00:22:37.186 "data_size": 63488 00:22:37.186 }, 00:22:37.186 { 00:22:37.186 "name": "BaseBdev2", 00:22:37.186 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:37.186 "is_configured": true, 00:22:37.186 "data_offset": 2048, 00:22:37.186 "data_size": 63488 00:22:37.186 }, 00:22:37.186 { 00:22:37.186 "name": 
"BaseBdev3", 00:22:37.186 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:37.186 "is_configured": true, 00:22:37.186 "data_offset": 2048, 00:22:37.186 "data_size": 63488 00:22:37.186 } 00:22:37.186 ] 00:22:37.186 }' 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.186 23:03:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:37.754 [2024-12-09 23:03:53.357897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:37.754 23:03:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.754 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:38.012 [2024-12-09 23:03:53.645273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:38.012 /dev/nbd0 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:38.012 1+0 records in 00:22:38.012 1+0 records out 00:22:38.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035588 s, 11.5 MB/s 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.012 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:38.013 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:38.013 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:22:38.013 23:03:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:38.584 496+0 records in 00:22:38.584 496+0 records out 00:22:38.584 65011712 bytes (65 MB, 62 MiB) copied, 0.431637 s, 151 MB/s 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:38.584 [2024-12-09 23:03:54.407356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.584 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:38.843 23:03:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.843 [2024-12-09 23:03:54.447269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.843 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.843 "name": "raid_bdev1", 00:22:38.843 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:38.843 "strip_size_kb": 64, 00:22:38.843 "state": "online", 00:22:38.843 "raid_level": "raid5f", 00:22:38.843 "superblock": true, 00:22:38.843 "num_base_bdevs": 3, 00:22:38.843 "num_base_bdevs_discovered": 2, 00:22:38.844 "num_base_bdevs_operational": 2, 00:22:38.844 "base_bdevs_list": [ 00:22:38.844 { 00:22:38.844 "name": null, 00:22:38.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.844 "is_configured": false, 00:22:38.844 "data_offset": 0, 00:22:38.844 "data_size": 63488 00:22:38.844 }, 00:22:38.844 { 00:22:38.844 "name": "BaseBdev2", 00:22:38.844 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:38.844 "is_configured": true, 00:22:38.844 "data_offset": 2048, 00:22:38.844 "data_size": 63488 00:22:38.844 }, 00:22:38.844 { 00:22:38.844 "name": "BaseBdev3", 00:22:38.844 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:38.844 "is_configured": true, 00:22:38.844 "data_offset": 2048, 00:22:38.844 "data_size": 63488 00:22:38.844 } 00:22:38.844 ] 00:22:38.844 }' 00:22:38.844 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.844 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.102 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:39.102 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.102 23:03:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.102 [2024-12-09 23:03:54.938593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.362 [2024-12-09 23:03:54.958855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:22:39.362 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.362 23:03:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:39.362 [2024-12-09 23:03:54.968258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.302 23:03:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:40.302 "name": "raid_bdev1", 00:22:40.302 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 
00:22:40.302 "strip_size_kb": 64, 00:22:40.302 "state": "online", 00:22:40.302 "raid_level": "raid5f", 00:22:40.302 "superblock": true, 00:22:40.302 "num_base_bdevs": 3, 00:22:40.302 "num_base_bdevs_discovered": 3, 00:22:40.302 "num_base_bdevs_operational": 3, 00:22:40.302 "process": { 00:22:40.302 "type": "rebuild", 00:22:40.302 "target": "spare", 00:22:40.302 "progress": { 00:22:40.302 "blocks": 18432, 00:22:40.302 "percent": 14 00:22:40.302 } 00:22:40.302 }, 00:22:40.302 "base_bdevs_list": [ 00:22:40.302 { 00:22:40.302 "name": "spare", 00:22:40.302 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:40.302 "is_configured": true, 00:22:40.302 "data_offset": 2048, 00:22:40.302 "data_size": 63488 00:22:40.302 }, 00:22:40.302 { 00:22:40.302 "name": "BaseBdev2", 00:22:40.302 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:40.302 "is_configured": true, 00:22:40.302 "data_offset": 2048, 00:22:40.302 "data_size": 63488 00:22:40.302 }, 00:22:40.302 { 00:22:40.302 "name": "BaseBdev3", 00:22:40.302 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:40.302 "is_configured": true, 00:22:40.302 "data_offset": 2048, 00:22:40.302 "data_size": 63488 00:22:40.302 } 00:22:40.302 ] 00:22:40.302 }' 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.302 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:22:40.302 [2024-12-09 23:03:56.104216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.562 [2024-12-09 23:03:56.181671] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:40.562 [2024-12-09 23:03:56.181771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.562 [2024-12-09 23:03:56.181796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.562 [2024-12-09 23:03:56.181807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.562 
23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.562 "name": "raid_bdev1", 00:22:40.562 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:40.562 "strip_size_kb": 64, 00:22:40.562 "state": "online", 00:22:40.562 "raid_level": "raid5f", 00:22:40.562 "superblock": true, 00:22:40.562 "num_base_bdevs": 3, 00:22:40.562 "num_base_bdevs_discovered": 2, 00:22:40.562 "num_base_bdevs_operational": 2, 00:22:40.562 "base_bdevs_list": [ 00:22:40.562 { 00:22:40.562 "name": null, 00:22:40.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.562 "is_configured": false, 00:22:40.562 "data_offset": 0, 00:22:40.562 "data_size": 63488 00:22:40.562 }, 00:22:40.562 { 00:22:40.562 "name": "BaseBdev2", 00:22:40.562 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:40.562 "is_configured": true, 00:22:40.562 "data_offset": 2048, 00:22:40.562 "data_size": 63488 00:22:40.562 }, 00:22:40.562 { 00:22:40.562 "name": "BaseBdev3", 00:22:40.562 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:40.562 "is_configured": true, 00:22:40.562 "data_offset": 2048, 00:22:40.562 "data_size": 63488 00:22:40.562 } 00:22:40.562 ] 00:22:40.562 }' 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.562 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.136 23:03:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:41.136 "name": "raid_bdev1", 00:22:41.136 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:41.136 "strip_size_kb": 64, 00:22:41.136 "state": "online", 00:22:41.136 "raid_level": "raid5f", 00:22:41.136 "superblock": true, 00:22:41.136 "num_base_bdevs": 3, 00:22:41.136 "num_base_bdevs_discovered": 2, 00:22:41.136 "num_base_bdevs_operational": 2, 00:22:41.136 "base_bdevs_list": [ 00:22:41.136 { 00:22:41.136 "name": null, 00:22:41.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.136 "is_configured": false, 00:22:41.136 "data_offset": 0, 00:22:41.136 "data_size": 63488 00:22:41.136 }, 00:22:41.136 { 00:22:41.136 "name": "BaseBdev2", 00:22:41.136 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:41.136 "is_configured": true, 00:22:41.136 "data_offset": 2048, 00:22:41.136 "data_size": 63488 00:22:41.136 }, 00:22:41.136 { 00:22:41.136 "name": "BaseBdev3", 00:22:41.136 "uuid": 
"e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:41.136 "is_configured": true, 00:22:41.136 "data_offset": 2048, 00:22:41.136 "data_size": 63488 00:22:41.136 } 00:22:41.136 ] 00:22:41.136 }' 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.136 [2024-12-09 23:03:56.861442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:41.136 [2024-12-09 23:03:56.882331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.136 23:03:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:41.136 [2024-12-09 23:03:56.891871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.085 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.344 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:42.344 "name": "raid_bdev1", 00:22:42.344 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:42.344 "strip_size_kb": 64, 00:22:42.344 "state": "online", 00:22:42.344 "raid_level": "raid5f", 00:22:42.344 "superblock": true, 00:22:42.344 "num_base_bdevs": 3, 00:22:42.344 "num_base_bdevs_discovered": 3, 00:22:42.344 "num_base_bdevs_operational": 3, 00:22:42.344 "process": { 00:22:42.344 "type": "rebuild", 00:22:42.344 "target": "spare", 00:22:42.344 "progress": { 00:22:42.344 "blocks": 18432, 00:22:42.344 "percent": 14 00:22:42.344 } 00:22:42.344 }, 00:22:42.344 "base_bdevs_list": [ 00:22:42.344 { 00:22:42.344 "name": "spare", 00:22:42.344 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:42.344 "is_configured": true, 00:22:42.344 "data_offset": 2048, 00:22:42.344 "data_size": 63488 00:22:42.344 }, 00:22:42.344 { 00:22:42.344 "name": "BaseBdev2", 00:22:42.344 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:42.344 "is_configured": true, 00:22:42.344 "data_offset": 2048, 00:22:42.344 "data_size": 63488 00:22:42.344 }, 00:22:42.344 { 00:22:42.344 "name": "BaseBdev3", 00:22:42.344 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:42.344 
"is_configured": true, 00:22:42.344 "data_offset": 2048, 00:22:42.344 "data_size": 63488 00:22:42.344 } 00:22:42.344 ] 00:22:42.344 }' 00:22:42.344 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:42.344 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:42.344 23:03:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:42.344 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=596 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.344 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:42.344 "name": "raid_bdev1", 00:22:42.345 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:42.345 "strip_size_kb": 64, 00:22:42.345 "state": "online", 00:22:42.345 "raid_level": "raid5f", 00:22:42.345 "superblock": true, 00:22:42.345 "num_base_bdevs": 3, 00:22:42.345 "num_base_bdevs_discovered": 3, 00:22:42.345 "num_base_bdevs_operational": 3, 00:22:42.345 "process": { 00:22:42.345 "type": "rebuild", 00:22:42.345 "target": "spare", 00:22:42.345 "progress": { 00:22:42.345 "blocks": 22528, 00:22:42.345 "percent": 17 00:22:42.345 } 00:22:42.345 }, 00:22:42.345 "base_bdevs_list": [ 00:22:42.345 { 00:22:42.345 "name": "spare", 00:22:42.345 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:42.345 "is_configured": true, 00:22:42.345 "data_offset": 2048, 00:22:42.345 "data_size": 63488 00:22:42.345 }, 00:22:42.345 { 00:22:42.345 "name": "BaseBdev2", 00:22:42.345 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:42.345 "is_configured": true, 00:22:42.345 "data_offset": 2048, 00:22:42.345 "data_size": 63488 00:22:42.345 }, 00:22:42.345 { 00:22:42.345 "name": "BaseBdev3", 00:22:42.345 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:42.345 "is_configured": true, 00:22:42.345 "data_offset": 2048, 00:22:42.345 "data_size": 63488 00:22:42.345 } 00:22:42.345 ] 00:22:42.345 }' 00:22:42.345 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:22:42.345 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:42.345 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:42.604 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:42.604 23:03:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:43.550 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:43.550 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.550 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.551 "name": "raid_bdev1", 00:22:43.551 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:43.551 "strip_size_kb": 64, 00:22:43.551 "state": "online", 00:22:43.551 
"raid_level": "raid5f", 00:22:43.551 "superblock": true, 00:22:43.551 "num_base_bdevs": 3, 00:22:43.551 "num_base_bdevs_discovered": 3, 00:22:43.551 "num_base_bdevs_operational": 3, 00:22:43.551 "process": { 00:22:43.551 "type": "rebuild", 00:22:43.551 "target": "spare", 00:22:43.551 "progress": { 00:22:43.551 "blocks": 47104, 00:22:43.551 "percent": 37 00:22:43.551 } 00:22:43.551 }, 00:22:43.551 "base_bdevs_list": [ 00:22:43.551 { 00:22:43.551 "name": "spare", 00:22:43.551 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:43.551 "is_configured": true, 00:22:43.551 "data_offset": 2048, 00:22:43.551 "data_size": 63488 00:22:43.551 }, 00:22:43.551 { 00:22:43.551 "name": "BaseBdev2", 00:22:43.551 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:43.551 "is_configured": true, 00:22:43.551 "data_offset": 2048, 00:22:43.551 "data_size": 63488 00:22:43.551 }, 00:22:43.551 { 00:22:43.551 "name": "BaseBdev3", 00:22:43.551 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:43.551 "is_configured": true, 00:22:43.551 "data_offset": 2048, 00:22:43.551 "data_size": 63488 00:22:43.551 } 00:22:43.551 ] 00:22:43.551 }' 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.551 23:03:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:44.928 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:44.929 "name": "raid_bdev1", 00:22:44.929 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:44.929 "strip_size_kb": 64, 00:22:44.929 "state": "online", 00:22:44.929 "raid_level": "raid5f", 00:22:44.929 "superblock": true, 00:22:44.929 "num_base_bdevs": 3, 00:22:44.929 "num_base_bdevs_discovered": 3, 00:22:44.929 "num_base_bdevs_operational": 3, 00:22:44.929 "process": { 00:22:44.929 "type": "rebuild", 00:22:44.929 "target": "spare", 00:22:44.929 "progress": { 00:22:44.929 "blocks": 69632, 00:22:44.929 "percent": 54 00:22:44.929 } 00:22:44.929 }, 00:22:44.929 "base_bdevs_list": [ 00:22:44.929 { 00:22:44.929 "name": "spare", 00:22:44.929 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:44.929 "is_configured": true, 00:22:44.929 "data_offset": 2048, 00:22:44.929 "data_size": 63488 00:22:44.929 }, 00:22:44.929 { 00:22:44.929 "name": "BaseBdev2", 00:22:44.929 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:44.929 
"is_configured": true, 00:22:44.929 "data_offset": 2048, 00:22:44.929 "data_size": 63488 00:22:44.929 }, 00:22:44.929 { 00:22:44.929 "name": "BaseBdev3", 00:22:44.929 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:44.929 "is_configured": true, 00:22:44.929 "data_offset": 2048, 00:22:44.929 "data_size": 63488 00:22:44.929 } 00:22:44.929 ] 00:22:44.929 }' 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.929 23:04:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.864 23:04:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.864 "name": "raid_bdev1", 00:22:45.864 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:45.864 "strip_size_kb": 64, 00:22:45.864 "state": "online", 00:22:45.864 "raid_level": "raid5f", 00:22:45.864 "superblock": true, 00:22:45.864 "num_base_bdevs": 3, 00:22:45.864 "num_base_bdevs_discovered": 3, 00:22:45.864 "num_base_bdevs_operational": 3, 00:22:45.864 "process": { 00:22:45.864 "type": "rebuild", 00:22:45.864 "target": "spare", 00:22:45.864 "progress": { 00:22:45.864 "blocks": 94208, 00:22:45.864 "percent": 74 00:22:45.864 } 00:22:45.864 }, 00:22:45.864 "base_bdevs_list": [ 00:22:45.864 { 00:22:45.864 "name": "spare", 00:22:45.864 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:45.864 "is_configured": true, 00:22:45.864 "data_offset": 2048, 00:22:45.864 "data_size": 63488 00:22:45.864 }, 00:22:45.864 { 00:22:45.864 "name": "BaseBdev2", 00:22:45.864 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:45.864 "is_configured": true, 00:22:45.864 "data_offset": 2048, 00:22:45.864 "data_size": 63488 00:22:45.864 }, 00:22:45.864 { 00:22:45.864 "name": "BaseBdev3", 00:22:45.864 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:45.864 "is_configured": true, 00:22:45.864 "data_offset": 2048, 00:22:45.864 "data_size": 63488 00:22:45.864 } 00:22:45.864 ] 00:22:45.864 }' 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.864 23:04:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:47.239 "name": "raid_bdev1", 00:22:47.239 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:47.239 "strip_size_kb": 64, 00:22:47.239 "state": "online", 00:22:47.239 "raid_level": "raid5f", 00:22:47.239 "superblock": true, 00:22:47.239 "num_base_bdevs": 3, 00:22:47.239 "num_base_bdevs_discovered": 3, 00:22:47.239 "num_base_bdevs_operational": 3, 00:22:47.239 "process": { 00:22:47.239 "type": "rebuild", 00:22:47.239 "target": "spare", 00:22:47.239 "progress": { 00:22:47.239 "blocks": 116736, 
00:22:47.239 "percent": 91 00:22:47.239 } 00:22:47.239 }, 00:22:47.239 "base_bdevs_list": [ 00:22:47.239 { 00:22:47.239 "name": "spare", 00:22:47.239 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:47.239 "is_configured": true, 00:22:47.239 "data_offset": 2048, 00:22:47.239 "data_size": 63488 00:22:47.239 }, 00:22:47.239 { 00:22:47.239 "name": "BaseBdev2", 00:22:47.239 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:47.239 "is_configured": true, 00:22:47.239 "data_offset": 2048, 00:22:47.239 "data_size": 63488 00:22:47.239 }, 00:22:47.239 { 00:22:47.239 "name": "BaseBdev3", 00:22:47.239 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:47.239 "is_configured": true, 00:22:47.239 "data_offset": 2048, 00:22:47.239 "data_size": 63488 00:22:47.239 } 00:22:47.239 ] 00:22:47.239 }' 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.239 23:04:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:47.498 [2024-12-09 23:04:03.157189] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:47.498 [2024-12-09 23:04:03.157396] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:47.498 [2024-12-09 23:04:03.157597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.070 
23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.070 "name": "raid_bdev1", 00:22:48.070 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:48.070 "strip_size_kb": 64, 00:22:48.070 "state": "online", 00:22:48.070 "raid_level": "raid5f", 00:22:48.070 "superblock": true, 00:22:48.070 "num_base_bdevs": 3, 00:22:48.070 "num_base_bdevs_discovered": 3, 00:22:48.070 "num_base_bdevs_operational": 3, 00:22:48.070 "base_bdevs_list": [ 00:22:48.070 { 00:22:48.070 "name": "spare", 00:22:48.070 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:48.070 "is_configured": true, 00:22:48.070 "data_offset": 2048, 00:22:48.070 "data_size": 63488 00:22:48.070 }, 00:22:48.070 { 00:22:48.070 "name": "BaseBdev2", 00:22:48.070 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:48.070 "is_configured": true, 00:22:48.070 "data_offset": 2048, 00:22:48.070 "data_size": 63488 00:22:48.070 }, 00:22:48.070 { 00:22:48.070 "name": "BaseBdev3", 00:22:48.070 
"uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:48.070 "is_configured": true, 00:22:48.070 "data_offset": 2048, 00:22:48.070 "data_size": 63488 00:22:48.070 } 00:22:48.070 ] 00:22:48.070 }' 00:22:48.070 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.329 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:48.329 23:04:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.329 "name": 
"raid_bdev1", 00:22:48.329 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:48.329 "strip_size_kb": 64, 00:22:48.329 "state": "online", 00:22:48.329 "raid_level": "raid5f", 00:22:48.329 "superblock": true, 00:22:48.329 "num_base_bdevs": 3, 00:22:48.329 "num_base_bdevs_discovered": 3, 00:22:48.329 "num_base_bdevs_operational": 3, 00:22:48.329 "base_bdevs_list": [ 00:22:48.329 { 00:22:48.329 "name": "spare", 00:22:48.329 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:48.329 "is_configured": true, 00:22:48.329 "data_offset": 2048, 00:22:48.329 "data_size": 63488 00:22:48.329 }, 00:22:48.329 { 00:22:48.329 "name": "BaseBdev2", 00:22:48.329 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:48.329 "is_configured": true, 00:22:48.329 "data_offset": 2048, 00:22:48.329 "data_size": 63488 00:22:48.329 }, 00:22:48.329 { 00:22:48.329 "name": "BaseBdev3", 00:22:48.329 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:48.329 "is_configured": true, 00:22:48.329 "data_offset": 2048, 00:22:48.329 "data_size": 63488 00:22:48.329 } 00:22:48.329 ] 00:22:48.329 }' 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.587 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.587 "name": "raid_bdev1", 00:22:48.587 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:48.587 "strip_size_kb": 64, 00:22:48.587 "state": "online", 00:22:48.587 "raid_level": "raid5f", 00:22:48.587 "superblock": true, 00:22:48.587 "num_base_bdevs": 3, 00:22:48.587 "num_base_bdevs_discovered": 3, 00:22:48.587 "num_base_bdevs_operational": 3, 00:22:48.587 "base_bdevs_list": [ 00:22:48.587 { 00:22:48.587 "name": "spare", 00:22:48.587 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:48.587 "is_configured": true, 00:22:48.587 "data_offset": 2048, 00:22:48.587 "data_size": 63488 00:22:48.587 }, 00:22:48.587 { 00:22:48.587 "name": "BaseBdev2", 
00:22:48.587 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:48.587 "is_configured": true, 00:22:48.587 "data_offset": 2048, 00:22:48.587 "data_size": 63488 00:22:48.587 }, 00:22:48.587 { 00:22:48.587 "name": "BaseBdev3", 00:22:48.587 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:48.587 "is_configured": true, 00:22:48.587 "data_offset": 2048, 00:22:48.587 "data_size": 63488 00:22:48.587 } 00:22:48.587 ] 00:22:48.587 }' 00:22:48.587 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.587 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.846 [2024-12-09 23:04:04.632772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.846 [2024-12-09 23:04:04.632879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.846 [2024-12-09 23:04:04.633021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.846 [2024-12-09 23:04:04.633161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.846 [2024-12-09 23:04:04.633231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.846 23:04:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:48.846 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:49.105 /dev/nbd0 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:49.105 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:49.364 1+0 records in 00:22:49.364 1+0 records out 00:22:49.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265304 s, 15.4 MB/s 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:22:49.364 23:04:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:49.364 /dev/nbd1 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:49.622 1+0 records in 00:22:49.622 1+0 records out 00:22:49.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350992 s, 11.7 MB/s 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.622 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.880 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.139 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.397 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.397 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:50.397 23:04:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.397 23:04:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.397 [2024-12-09 23:04:06.003851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:50.397 [2024-12-09 23:04:06.003952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.397 [2024-12-09 23:04:06.003986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:50.397 [2024-12-09 23:04:06.004002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.397 [2024-12-09 23:04:06.007093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.397 [2024-12-09 23:04:06.007236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:50.397 [2024-12-09 23:04:06.007394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:50.398 [2024-12-09 23:04:06.007504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.398 [2024-12-09 23:04:06.007720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.398 [2024-12-09 23:04:06.007859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:50.398 spare 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.398 [2024-12-09 23:04:06.107865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 
00:22:50.398 [2024-12-09 23:04:06.107944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:50.398 [2024-12-09 23:04:06.108382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:22:50.398 [2024-12-09 23:04:06.115752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:50.398 [2024-12-09 23:04:06.115799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:50.398 [2024-12-09 23:04:06.116125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.398 "name": "raid_bdev1", 00:22:50.398 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:50.398 "strip_size_kb": 64, 00:22:50.398 "state": "online", 00:22:50.398 "raid_level": "raid5f", 00:22:50.398 "superblock": true, 00:22:50.398 "num_base_bdevs": 3, 00:22:50.398 "num_base_bdevs_discovered": 3, 00:22:50.398 "num_base_bdevs_operational": 3, 00:22:50.398 "base_bdevs_list": [ 00:22:50.398 { 00:22:50.398 "name": "spare", 00:22:50.398 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:50.398 "is_configured": true, 00:22:50.398 "data_offset": 2048, 00:22:50.398 "data_size": 63488 00:22:50.398 }, 00:22:50.398 { 00:22:50.398 "name": "BaseBdev2", 00:22:50.398 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:50.398 "is_configured": true, 00:22:50.398 "data_offset": 2048, 00:22:50.398 "data_size": 63488 00:22:50.398 }, 00:22:50.398 { 00:22:50.398 "name": "BaseBdev3", 00:22:50.398 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:50.398 "is_configured": true, 00:22:50.398 "data_offset": 2048, 00:22:50.398 "data_size": 63488 00:22:50.398 } 00:22:50.398 ] 00:22:50.398 }' 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.398 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:50.966 23:04:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:50.966 "name": "raid_bdev1", 00:22:50.966 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:50.966 "strip_size_kb": 64, 00:22:50.966 "state": "online", 00:22:50.966 "raid_level": "raid5f", 00:22:50.966 "superblock": true, 00:22:50.966 "num_base_bdevs": 3, 00:22:50.966 "num_base_bdevs_discovered": 3, 00:22:50.966 "num_base_bdevs_operational": 3, 00:22:50.966 "base_bdevs_list": [ 00:22:50.966 { 00:22:50.966 "name": "spare", 00:22:50.966 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:50.966 "is_configured": true, 00:22:50.966 "data_offset": 2048, 00:22:50.966 "data_size": 63488 00:22:50.966 }, 00:22:50.966 { 00:22:50.966 "name": "BaseBdev2", 00:22:50.966 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:50.966 "is_configured": true, 00:22:50.966 "data_offset": 2048, 00:22:50.966 "data_size": 63488 00:22:50.966 }, 00:22:50.966 { 00:22:50.966 "name": "BaseBdev3", 00:22:50.966 "uuid": 
"e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:50.966 "is_configured": true, 00:22:50.966 "data_offset": 2048, 00:22:50.966 "data_size": 63488 00:22:50.966 } 00:22:50.966 ] 00:22:50.966 }' 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.966 [2024-12-09 23:04:06.720713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:50.966 
23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.966 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.967 "name": "raid_bdev1", 00:22:50.967 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:50.967 "strip_size_kb": 64, 00:22:50.967 "state": "online", 00:22:50.967 "raid_level": "raid5f", 00:22:50.967 "superblock": true, 00:22:50.967 "num_base_bdevs": 3, 00:22:50.967 "num_base_bdevs_discovered": 2, 00:22:50.967 "num_base_bdevs_operational": 2, 
00:22:50.967 "base_bdevs_list": [ 00:22:50.967 { 00:22:50.967 "name": null, 00:22:50.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.967 "is_configured": false, 00:22:50.967 "data_offset": 0, 00:22:50.967 "data_size": 63488 00:22:50.967 }, 00:22:50.967 { 00:22:50.967 "name": "BaseBdev2", 00:22:50.967 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:50.967 "is_configured": true, 00:22:50.967 "data_offset": 2048, 00:22:50.967 "data_size": 63488 00:22:50.967 }, 00:22:50.967 { 00:22:50.967 "name": "BaseBdev3", 00:22:50.967 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:50.967 "is_configured": true, 00:22:50.967 "data_offset": 2048, 00:22:50.967 "data_size": 63488 00:22:50.967 } 00:22:50.967 ] 00:22:50.967 }' 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.967 23:04:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.533 23:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:51.533 23:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.533 23:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.533 [2024-12-09 23:04:07.196202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:51.533 [2024-12-09 23:04:07.196560] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:51.533 [2024-12-09 23:04:07.196654] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:51.533 [2024-12-09 23:04:07.196724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:51.533 [2024-12-09 23:04:07.216789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:22:51.533 23:04:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.533 23:04:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:51.533 [2024-12-09 23:04:07.227235] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.467 "name": "raid_bdev1", 00:22:52.467 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:52.467 "strip_size_kb": 64, 00:22:52.467 "state": "online", 00:22:52.467 
"raid_level": "raid5f", 00:22:52.467 "superblock": true, 00:22:52.467 "num_base_bdevs": 3, 00:22:52.467 "num_base_bdevs_discovered": 3, 00:22:52.467 "num_base_bdevs_operational": 3, 00:22:52.467 "process": { 00:22:52.467 "type": "rebuild", 00:22:52.467 "target": "spare", 00:22:52.467 "progress": { 00:22:52.467 "blocks": 18432, 00:22:52.467 "percent": 14 00:22:52.467 } 00:22:52.467 }, 00:22:52.467 "base_bdevs_list": [ 00:22:52.467 { 00:22:52.467 "name": "spare", 00:22:52.467 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:52.467 "is_configured": true, 00:22:52.467 "data_offset": 2048, 00:22:52.467 "data_size": 63488 00:22:52.467 }, 00:22:52.467 { 00:22:52.467 "name": "BaseBdev2", 00:22:52.467 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:52.467 "is_configured": true, 00:22:52.467 "data_offset": 2048, 00:22:52.467 "data_size": 63488 00:22:52.467 }, 00:22:52.467 { 00:22:52.467 "name": "BaseBdev3", 00:22:52.467 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:52.467 "is_configured": true, 00:22:52.467 "data_offset": 2048, 00:22:52.467 "data_size": 63488 00:22:52.467 } 00:22:52.467 ] 00:22:52.467 }' 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.467 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.725 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.725 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.725 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.726 [2024-12-09 23:04:08.379780] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.726 [2024-12-09 23:04:08.441029] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:52.726 [2024-12-09 23:04:08.441153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.726 [2024-12-09 23:04:08.441177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.726 [2024-12-09 23:04:08.441190] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.726 "name": "raid_bdev1", 00:22:52.726 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:52.726 "strip_size_kb": 64, 00:22:52.726 "state": "online", 00:22:52.726 "raid_level": "raid5f", 00:22:52.726 "superblock": true, 00:22:52.726 "num_base_bdevs": 3, 00:22:52.726 "num_base_bdevs_discovered": 2, 00:22:52.726 "num_base_bdevs_operational": 2, 00:22:52.726 "base_bdevs_list": [ 00:22:52.726 { 00:22:52.726 "name": null, 00:22:52.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.726 "is_configured": false, 00:22:52.726 "data_offset": 0, 00:22:52.726 "data_size": 63488 00:22:52.726 }, 00:22:52.726 { 00:22:52.726 "name": "BaseBdev2", 00:22:52.726 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:52.726 "is_configured": true, 00:22:52.726 "data_offset": 2048, 00:22:52.726 "data_size": 63488 00:22:52.726 }, 00:22:52.726 { 00:22:52.726 "name": "BaseBdev3", 00:22:52.726 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:52.726 "is_configured": true, 00:22:52.726 "data_offset": 2048, 00:22:52.726 "data_size": 63488 00:22:52.726 } 00:22:52.726 ] 00:22:52.726 }' 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.726 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.292 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:53.292 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.292 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.292 [2024-12-09 23:04:08.943365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.292 [2024-12-09 23:04:08.943468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.292 [2024-12-09 23:04:08.943498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:53.292 [2024-12-09 23:04:08.943516] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.292 [2024-12-09 23:04:08.944150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.292 [2024-12-09 23:04:08.944193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.292 [2024-12-09 23:04:08.944332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:53.292 [2024-12-09 23:04:08.944357] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:53.292 [2024-12-09 23:04:08.944371] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:53.292 [2024-12-09 23:04:08.944405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.292 [2024-12-09 23:04:08.965124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:22:53.292 spare 00:22:53.292 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.292 23:04:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:53.292 [2024-12-09 23:04:08.975082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.234 23:04:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.234 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.234 "name": "raid_bdev1", 00:22:54.234 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:54.234 "strip_size_kb": 64, 00:22:54.234 "state": 
"online", 00:22:54.234 "raid_level": "raid5f", 00:22:54.234 "superblock": true, 00:22:54.234 "num_base_bdevs": 3, 00:22:54.234 "num_base_bdevs_discovered": 3, 00:22:54.234 "num_base_bdevs_operational": 3, 00:22:54.234 "process": { 00:22:54.234 "type": "rebuild", 00:22:54.234 "target": "spare", 00:22:54.234 "progress": { 00:22:54.234 "blocks": 18432, 00:22:54.234 "percent": 14 00:22:54.234 } 00:22:54.234 }, 00:22:54.234 "base_bdevs_list": [ 00:22:54.234 { 00:22:54.234 "name": "spare", 00:22:54.234 "uuid": "7ac0991c-b299-53cf-97d2-7eef79f7b2dc", 00:22:54.234 "is_configured": true, 00:22:54.234 "data_offset": 2048, 00:22:54.234 "data_size": 63488 00:22:54.234 }, 00:22:54.234 { 00:22:54.234 "name": "BaseBdev2", 00:22:54.234 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:54.234 "is_configured": true, 00:22:54.234 "data_offset": 2048, 00:22:54.234 "data_size": 63488 00:22:54.234 }, 00:22:54.234 { 00:22:54.234 "name": "BaseBdev3", 00:22:54.234 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:54.234 "is_configured": true, 00:22:54.234 "data_offset": 2048, 00:22:54.234 "data_size": 63488 00:22:54.234 } 00:22:54.234 ] 00:22:54.234 }' 00:22:54.234 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.234 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.234 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.493 [2024-12-09 23:04:10.114947] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.493 [2024-12-09 23:04:10.188267] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.493 [2024-12-09 23:04:10.188500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.493 [2024-12-09 23:04:10.188552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.493 [2024-12-09 23:04:10.188564] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.493 "name": "raid_bdev1", 00:22:54.493 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:54.493 "strip_size_kb": 64, 00:22:54.493 "state": "online", 00:22:54.493 "raid_level": "raid5f", 00:22:54.493 "superblock": true, 00:22:54.493 "num_base_bdevs": 3, 00:22:54.493 "num_base_bdevs_discovered": 2, 00:22:54.493 "num_base_bdevs_operational": 2, 00:22:54.493 "base_bdevs_list": [ 00:22:54.493 { 00:22:54.493 "name": null, 00:22:54.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.493 "is_configured": false, 00:22:54.493 "data_offset": 0, 00:22:54.493 "data_size": 63488 00:22:54.493 }, 00:22:54.493 { 00:22:54.493 "name": "BaseBdev2", 00:22:54.493 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:54.493 "is_configured": true, 00:22:54.493 "data_offset": 2048, 00:22:54.493 "data_size": 63488 00:22:54.493 }, 00:22:54.493 { 00:22:54.493 "name": "BaseBdev3", 00:22:54.493 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:54.493 "is_configured": true, 00:22:54.493 "data_offset": 2048, 00:22:54.493 "data_size": 63488 00:22:54.493 } 00:22:54.493 ] 00:22:54.493 }' 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.493 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.063 "name": "raid_bdev1", 00:22:55.063 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:55.063 "strip_size_kb": 64, 00:22:55.063 "state": "online", 00:22:55.063 "raid_level": "raid5f", 00:22:55.063 "superblock": true, 00:22:55.063 "num_base_bdevs": 3, 00:22:55.063 "num_base_bdevs_discovered": 2, 00:22:55.063 "num_base_bdevs_operational": 2, 00:22:55.063 "base_bdevs_list": [ 00:22:55.063 { 00:22:55.063 "name": null, 00:22:55.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.063 "is_configured": false, 00:22:55.063 "data_offset": 0, 00:22:55.063 "data_size": 63488 00:22:55.063 }, 00:22:55.063 { 00:22:55.063 "name": "BaseBdev2", 00:22:55.063 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:55.063 "is_configured": true, 00:22:55.063 "data_offset": 2048, 00:22:55.063 "data_size": 63488 00:22:55.063 }, 00:22:55.063 { 00:22:55.063 "name": "BaseBdev3", 00:22:55.063 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:55.063 "is_configured": true, 
00:22:55.063 "data_offset": 2048, 00:22:55.063 "data_size": 63488 00:22:55.063 } 00:22:55.063 ] 00:22:55.063 }' 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.063 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.063 [2024-12-09 23:04:10.806213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.063 [2024-12-09 23:04:10.806378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.063 [2024-12-09 23:04:10.806425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:55.063 [2024-12-09 23:04:10.806438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.063 [2024-12-09 23:04:10.807058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.063 [2024-12-09 
23:04:10.807093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.063 [2024-12-09 23:04:10.807210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:55.063 [2024-12-09 23:04:10.807234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:55.064 [2024-12-09 23:04:10.807260] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:55.064 [2024-12-09 23:04:10.807274] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:55.064 BaseBdev1 00:22:55.064 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.064 23:04:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.002 23:04:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.002 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.261 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.261 "name": "raid_bdev1", 00:22:56.261 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:56.261 "strip_size_kb": 64, 00:22:56.261 "state": "online", 00:22:56.261 "raid_level": "raid5f", 00:22:56.261 "superblock": true, 00:22:56.261 "num_base_bdevs": 3, 00:22:56.261 "num_base_bdevs_discovered": 2, 00:22:56.261 "num_base_bdevs_operational": 2, 00:22:56.261 "base_bdevs_list": [ 00:22:56.261 { 00:22:56.261 "name": null, 00:22:56.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.261 "is_configured": false, 00:22:56.261 "data_offset": 0, 00:22:56.261 "data_size": 63488 00:22:56.261 }, 00:22:56.261 { 00:22:56.261 "name": "BaseBdev2", 00:22:56.261 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:56.261 "is_configured": true, 00:22:56.261 "data_offset": 2048, 00:22:56.261 "data_size": 63488 00:22:56.261 }, 00:22:56.261 { 00:22:56.261 "name": "BaseBdev3", 00:22:56.261 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:56.261 "is_configured": true, 00:22:56.261 "data_offset": 2048, 00:22:56.261 "data_size": 63488 00:22:56.261 } 00:22:56.261 ] 00:22:56.261 }' 00:22:56.261 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.261 23:04:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.520 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:56.520 "name": "raid_bdev1", 00:22:56.520 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:56.520 "strip_size_kb": 64, 00:22:56.520 "state": "online", 00:22:56.520 "raid_level": "raid5f", 00:22:56.520 "superblock": true, 00:22:56.520 "num_base_bdevs": 3, 00:22:56.520 "num_base_bdevs_discovered": 2, 00:22:56.520 "num_base_bdevs_operational": 2, 00:22:56.520 "base_bdevs_list": [ 00:22:56.520 { 00:22:56.520 "name": null, 00:22:56.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.520 "is_configured": false, 00:22:56.520 "data_offset": 0, 00:22:56.520 "data_size": 63488 00:22:56.520 }, 00:22:56.520 { 00:22:56.520 "name": "BaseBdev2", 00:22:56.521 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 
00:22:56.521 "is_configured": true, 00:22:56.521 "data_offset": 2048, 00:22:56.521 "data_size": 63488 00:22:56.521 }, 00:22:56.521 { 00:22:56.521 "name": "BaseBdev3", 00:22:56.521 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:56.521 "is_configured": true, 00:22:56.521 "data_offset": 2048, 00:22:56.521 "data_size": 63488 00:22:56.521 } 00:22:56.521 ] 00:22:56.521 }' 00:22:56.521 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.521 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:56.521 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.779 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:56.779 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:56.779 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:22:56.779 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:56.779 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:56.779 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.780 23:04:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.780 [2024-12-09 23:04:12.408260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:56.780 [2024-12-09 23:04:12.408489] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:56.780 [2024-12-09 23:04:12.408523] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:56.780 request: 00:22:56.780 { 00:22:56.780 "base_bdev": "BaseBdev1", 00:22:56.780 "raid_bdev": "raid_bdev1", 00:22:56.780 "method": "bdev_raid_add_base_bdev", 00:22:56.780 "req_id": 1 00:22:56.780 } 00:22:56.780 Got JSON-RPC error response 00:22:56.780 response: 00:22:56.780 { 00:22:56.780 "code": -22, 00:22:56.780 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:56.780 } 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.780 23:04:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.718 "name": "raid_bdev1", 00:22:57.718 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:57.718 "strip_size_kb": 64, 00:22:57.718 "state": "online", 00:22:57.718 "raid_level": "raid5f", 00:22:57.718 "superblock": true, 00:22:57.718 "num_base_bdevs": 3, 00:22:57.718 "num_base_bdevs_discovered": 2, 00:22:57.718 "num_base_bdevs_operational": 2, 00:22:57.718 "base_bdevs_list": [ 00:22:57.718 { 00:22:57.718 "name": null, 00:22:57.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.718 "is_configured": false, 00:22:57.718 "data_offset": 0, 00:22:57.718 "data_size": 63488 00:22:57.718 }, 00:22:57.718 { 00:22:57.718 
"name": "BaseBdev2", 00:22:57.718 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:57.718 "is_configured": true, 00:22:57.718 "data_offset": 2048, 00:22:57.718 "data_size": 63488 00:22:57.718 }, 00:22:57.718 { 00:22:57.718 "name": "BaseBdev3", 00:22:57.718 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:57.718 "is_configured": true, 00:22:57.718 "data_offset": 2048, 00:22:57.718 "data_size": 63488 00:22:57.718 } 00:22:57.718 ] 00:22:57.718 }' 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.718 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.286 "name": "raid_bdev1", 00:22:58.286 "uuid": "230c57e9-25c4-4242-a7dc-52c19be4705d", 00:22:58.286 
"strip_size_kb": 64, 00:22:58.286 "state": "online", 00:22:58.286 "raid_level": "raid5f", 00:22:58.286 "superblock": true, 00:22:58.286 "num_base_bdevs": 3, 00:22:58.286 "num_base_bdevs_discovered": 2, 00:22:58.286 "num_base_bdevs_operational": 2, 00:22:58.286 "base_bdevs_list": [ 00:22:58.286 { 00:22:58.286 "name": null, 00:22:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.286 "is_configured": false, 00:22:58.286 "data_offset": 0, 00:22:58.286 "data_size": 63488 00:22:58.286 }, 00:22:58.286 { 00:22:58.286 "name": "BaseBdev2", 00:22:58.286 "uuid": "86695d9b-c458-5d89-ab09-938a4037b088", 00:22:58.286 "is_configured": true, 00:22:58.286 "data_offset": 2048, 00:22:58.286 "data_size": 63488 00:22:58.286 }, 00:22:58.286 { 00:22:58.286 "name": "BaseBdev3", 00:22:58.286 "uuid": "e7dbdd8f-0d2b-5a10-be44-a3a116be0167", 00:22:58.286 "is_configured": true, 00:22:58.286 "data_offset": 2048, 00:22:58.286 "data_size": 63488 00:22:58.286 } 00:22:58.286 ] 00:22:58.286 }' 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:58.286 23:04:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82696 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82696 ']' 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82696 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.286 23:04:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82696 00:22:58.286 killing process with pid 82696 00:22:58.286 Received shutdown signal, test time was about 60.000000 seconds 00:22:58.286 00:22:58.286 Latency(us) 00:22:58.286 [2024-12-09T23:04:14.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.286 [2024-12-09T23:04:14.142Z] =================================================================================================================== 00:22:58.286 [2024-12-09T23:04:14.142Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82696' 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82696 00:22:58.286 23:04:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82696 00:22:58.286 [2024-12-09 23:04:14.066336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:58.286 [2024-12-09 23:04:14.066504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.286 [2024-12-09 23:04:14.066594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.286 [2024-12-09 23:04:14.066610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:58.854 [2024-12-09 23:04:14.559638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:00.262 23:04:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:00.262 00:23:00.262 real 0m24.363s 00:23:00.262 user 0m31.335s 
00:23:00.262 sys 0m2.773s 00:23:00.262 ************************************ 00:23:00.262 END TEST raid5f_rebuild_test_sb 00:23:00.262 ************************************ 00:23:00.262 23:04:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.262 23:04:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.262 23:04:15 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:23:00.262 23:04:15 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:00.262 23:04:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:00.262 23:04:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.262 23:04:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:00.262 ************************************ 00:23:00.262 START TEST raid5f_state_function_test 00:23:00.262 ************************************ 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83455 00:23:00.262 Process raid pid: 83455 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83455' 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83455 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83455 ']' 00:23:00.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.262 23:04:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.262 [2024-12-09 23:04:16.025070] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:23:00.262 [2024-12-09 23:04:16.025309] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.522 [2024-12-09 23:04:16.207281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.522 [2024-12-09 23:04:16.350797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.784 [2024-12-09 23:04:16.588631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.784 [2024-12-09 23:04:16.588686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.354 [2024-12-09 23:04:16.976381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.354 [2024-12-09 23:04:16.976596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.354 [2024-12-09 23:04:16.976619] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.354 [2024-12-09 23:04:16.976632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.354 [2024-12-09 23:04:16.976641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:23:01.354 [2024-12-09 23:04:16.976652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:01.354 [2024-12-09 23:04:16.976660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:01.354 [2024-12-09 23:04:16.976670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.354 23:04:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.354 23:04:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.354 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.354 "name": "Existed_Raid", 00:23:01.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.354 "strip_size_kb": 64, 00:23:01.354 "state": "configuring", 00:23:01.354 "raid_level": "raid5f", 00:23:01.354 "superblock": false, 00:23:01.354 "num_base_bdevs": 4, 00:23:01.354 "num_base_bdevs_discovered": 0, 00:23:01.354 "num_base_bdevs_operational": 4, 00:23:01.354 "base_bdevs_list": [ 00:23:01.354 { 00:23:01.354 "name": "BaseBdev1", 00:23:01.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.354 "is_configured": false, 00:23:01.354 "data_offset": 0, 00:23:01.354 "data_size": 0 00:23:01.354 }, 00:23:01.354 { 00:23:01.354 "name": "BaseBdev2", 00:23:01.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.354 "is_configured": false, 00:23:01.354 "data_offset": 0, 00:23:01.354 "data_size": 0 00:23:01.354 }, 00:23:01.354 { 00:23:01.354 "name": "BaseBdev3", 00:23:01.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.354 "is_configured": false, 00:23:01.354 "data_offset": 0, 00:23:01.354 "data_size": 0 00:23:01.354 }, 00:23:01.354 { 00:23:01.354 "name": "BaseBdev4", 00:23:01.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.354 "is_configured": false, 00:23:01.354 "data_offset": 0, 00:23:01.354 "data_size": 0 00:23:01.354 } 00:23:01.354 ] 00:23:01.354 }' 00:23:01.354 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.354 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.613 [2024-12-09 23:04:17.459484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:01.613 [2024-12-09 23:04:17.459612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.613 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.873 [2024-12-09 23:04:17.471491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.873 [2024-12-09 23:04:17.471639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.873 [2024-12-09 23:04:17.471673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.873 [2024-12-09 23:04:17.471702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.873 [2024-12-09 23:04:17.471724] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:01.873 [2024-12-09 23:04:17.471749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:01.873 [2024-12-09 23:04:17.471777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:23:01.873 [2024-12-09 23:04:17.471805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.873 BaseBdev1 00:23:01.873 [2024-12-09 23:04:17.525819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.873 
23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.873 [ 00:23:01.873 { 00:23:01.873 "name": "BaseBdev1", 00:23:01.873 "aliases": [ 00:23:01.873 "35422d8c-9ce7-4af0-8e76-91af921cf1cd" 00:23:01.873 ], 00:23:01.873 "product_name": "Malloc disk", 00:23:01.873 "block_size": 512, 00:23:01.873 "num_blocks": 65536, 00:23:01.873 "uuid": "35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:01.873 "assigned_rate_limits": { 00:23:01.873 "rw_ios_per_sec": 0, 00:23:01.873 "rw_mbytes_per_sec": 0, 00:23:01.873 "r_mbytes_per_sec": 0, 00:23:01.873 "w_mbytes_per_sec": 0 00:23:01.873 }, 00:23:01.873 "claimed": true, 00:23:01.873 "claim_type": "exclusive_write", 00:23:01.873 "zoned": false, 00:23:01.873 "supported_io_types": { 00:23:01.873 "read": true, 00:23:01.873 "write": true, 00:23:01.873 "unmap": true, 00:23:01.873 "flush": true, 00:23:01.873 "reset": true, 00:23:01.873 "nvme_admin": false, 00:23:01.873 "nvme_io": false, 00:23:01.873 "nvme_io_md": false, 00:23:01.873 "write_zeroes": true, 00:23:01.873 "zcopy": true, 00:23:01.873 "get_zone_info": false, 00:23:01.873 "zone_management": false, 00:23:01.873 "zone_append": false, 00:23:01.873 "compare": false, 00:23:01.873 "compare_and_write": false, 00:23:01.873 "abort": true, 00:23:01.873 "seek_hole": false, 00:23:01.873 "seek_data": false, 00:23:01.873 "copy": true, 00:23:01.873 "nvme_iov_md": false 00:23:01.873 }, 00:23:01.873 "memory_domains": [ 00:23:01.873 { 00:23:01.873 "dma_device_id": "system", 00:23:01.873 "dma_device_type": 1 00:23:01.873 }, 00:23:01.873 { 00:23:01.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.873 "dma_device_type": 2 00:23:01.873 } 00:23:01.873 ], 00:23:01.873 "driver_specific": {} 00:23:01.873 } 
00:23:01.873 ] 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.873 "name": "Existed_Raid", 00:23:01.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.873 "strip_size_kb": 64, 00:23:01.873 "state": "configuring", 00:23:01.873 "raid_level": "raid5f", 00:23:01.873 "superblock": false, 00:23:01.873 "num_base_bdevs": 4, 00:23:01.873 "num_base_bdevs_discovered": 1, 00:23:01.873 "num_base_bdevs_operational": 4, 00:23:01.873 "base_bdevs_list": [ 00:23:01.873 { 00:23:01.873 "name": "BaseBdev1", 00:23:01.873 "uuid": "35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:01.873 "is_configured": true, 00:23:01.873 "data_offset": 0, 00:23:01.873 "data_size": 65536 00:23:01.873 }, 00:23:01.873 { 00:23:01.873 "name": "BaseBdev2", 00:23:01.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.873 "is_configured": false, 00:23:01.873 "data_offset": 0, 00:23:01.873 "data_size": 0 00:23:01.873 }, 00:23:01.873 { 00:23:01.873 "name": "BaseBdev3", 00:23:01.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.873 "is_configured": false, 00:23:01.873 "data_offset": 0, 00:23:01.873 "data_size": 0 00:23:01.873 }, 00:23:01.873 { 00:23:01.873 "name": "BaseBdev4", 00:23:01.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.873 "is_configured": false, 00:23:01.873 "data_offset": 0, 00:23:01.873 "data_size": 0 00:23:01.873 } 00:23:01.873 ] 00:23:01.873 }' 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.873 23:04:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.442 
[2024-12-09 23:04:18.017126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.442 [2024-12-09 23:04:18.017254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.442 [2024-12-09 23:04:18.029205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.442 [2024-12-09 23:04:18.031578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.442 [2024-12-09 23:04:18.031704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.442 [2024-12-09 23:04:18.031746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:02.442 [2024-12-09 23:04:18.031780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:02.442 [2024-12-09 23:04:18.031819] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:02.442 [2024-12-09 23:04:18.031847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.442 "name": "Existed_Raid", 00:23:02.442 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:02.442 "strip_size_kb": 64, 00:23:02.442 "state": "configuring", 00:23:02.442 "raid_level": "raid5f", 00:23:02.442 "superblock": false, 00:23:02.442 "num_base_bdevs": 4, 00:23:02.442 "num_base_bdevs_discovered": 1, 00:23:02.442 "num_base_bdevs_operational": 4, 00:23:02.442 "base_bdevs_list": [ 00:23:02.442 { 00:23:02.442 "name": "BaseBdev1", 00:23:02.442 "uuid": "35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:02.442 "is_configured": true, 00:23:02.442 "data_offset": 0, 00:23:02.442 "data_size": 65536 00:23:02.442 }, 00:23:02.442 { 00:23:02.442 "name": "BaseBdev2", 00:23:02.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.442 "is_configured": false, 00:23:02.442 "data_offset": 0, 00:23:02.442 "data_size": 0 00:23:02.442 }, 00:23:02.442 { 00:23:02.442 "name": "BaseBdev3", 00:23:02.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.442 "is_configured": false, 00:23:02.442 "data_offset": 0, 00:23:02.442 "data_size": 0 00:23:02.442 }, 00:23:02.442 { 00:23:02.442 "name": "BaseBdev4", 00:23:02.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.442 "is_configured": false, 00:23:02.442 "data_offset": 0, 00:23:02.442 "data_size": 0 00:23:02.442 } 00:23:02.442 ] 00:23:02.442 }' 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.442 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.702 [2024-12-09 23:04:18.521117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.702 BaseBdev2 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:02.702 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.703 [ 00:23:02.703 { 00:23:02.703 "name": "BaseBdev2", 00:23:02.703 "aliases": [ 00:23:02.703 "a1844d37-7623-4b78-90c9-7c134802de3d" 00:23:02.703 ], 00:23:02.703 "product_name": "Malloc disk", 00:23:02.703 "block_size": 512, 00:23:02.703 "num_blocks": 65536, 00:23:02.703 "uuid": "a1844d37-7623-4b78-90c9-7c134802de3d", 00:23:02.703 "assigned_rate_limits": { 00:23:02.703 "rw_ios_per_sec": 0, 00:23:02.703 "rw_mbytes_per_sec": 0, 00:23:02.703 
"r_mbytes_per_sec": 0, 00:23:02.703 "w_mbytes_per_sec": 0 00:23:02.703 }, 00:23:02.703 "claimed": true, 00:23:02.703 "claim_type": "exclusive_write", 00:23:02.703 "zoned": false, 00:23:02.703 "supported_io_types": { 00:23:02.703 "read": true, 00:23:02.703 "write": true, 00:23:02.703 "unmap": true, 00:23:02.703 "flush": true, 00:23:02.703 "reset": true, 00:23:02.703 "nvme_admin": false, 00:23:02.703 "nvme_io": false, 00:23:02.703 "nvme_io_md": false, 00:23:02.703 "write_zeroes": true, 00:23:02.703 "zcopy": true, 00:23:02.703 "get_zone_info": false, 00:23:02.703 "zone_management": false, 00:23:02.703 "zone_append": false, 00:23:02.703 "compare": false, 00:23:02.703 "compare_and_write": false, 00:23:02.703 "abort": true, 00:23:02.703 "seek_hole": false, 00:23:02.703 "seek_data": false, 00:23:02.703 "copy": true, 00:23:02.703 "nvme_iov_md": false 00:23:02.703 }, 00:23:02.703 "memory_domains": [ 00:23:02.703 { 00:23:02.703 "dma_device_id": "system", 00:23:02.703 "dma_device_type": 1 00:23:02.703 }, 00:23:02.703 { 00:23:02.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.703 "dma_device_type": 2 00:23:02.703 } 00:23:02.703 ], 00:23:02.703 "driver_specific": {} 00:23:02.703 } 00:23:02.703 ] 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.703 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.962 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.962 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.962 "name": "Existed_Raid", 00:23:02.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.962 "strip_size_kb": 64, 00:23:02.962 "state": "configuring", 00:23:02.962 "raid_level": "raid5f", 00:23:02.962 "superblock": false, 00:23:02.962 "num_base_bdevs": 4, 00:23:02.962 "num_base_bdevs_discovered": 2, 00:23:02.962 "num_base_bdevs_operational": 4, 00:23:02.962 "base_bdevs_list": [ 00:23:02.962 { 00:23:02.962 "name": "BaseBdev1", 00:23:02.962 "uuid": 
"35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:02.962 "is_configured": true, 00:23:02.962 "data_offset": 0, 00:23:02.962 "data_size": 65536 00:23:02.962 }, 00:23:02.962 { 00:23:02.962 "name": "BaseBdev2", 00:23:02.962 "uuid": "a1844d37-7623-4b78-90c9-7c134802de3d", 00:23:02.962 "is_configured": true, 00:23:02.962 "data_offset": 0, 00:23:02.962 "data_size": 65536 00:23:02.962 }, 00:23:02.962 { 00:23:02.962 "name": "BaseBdev3", 00:23:02.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.962 "is_configured": false, 00:23:02.962 "data_offset": 0, 00:23:02.962 "data_size": 0 00:23:02.962 }, 00:23:02.962 { 00:23:02.962 "name": "BaseBdev4", 00:23:02.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.962 "is_configured": false, 00:23:02.962 "data_offset": 0, 00:23:02.962 "data_size": 0 00:23:02.962 } 00:23:02.962 ] 00:23:02.962 }' 00:23:02.962 23:04:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.962 23:04:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.220 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:03.220 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.220 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.492 [2024-12-09 23:04:19.085768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:03.492 BaseBdev3 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.492 [ 00:23:03.492 { 00:23:03.492 "name": "BaseBdev3", 00:23:03.492 "aliases": [ 00:23:03.492 "ad06e0ed-1d95-4989-9c0b-5609f9ac0d22" 00:23:03.492 ], 00:23:03.492 "product_name": "Malloc disk", 00:23:03.492 "block_size": 512, 00:23:03.492 "num_blocks": 65536, 00:23:03.492 "uuid": "ad06e0ed-1d95-4989-9c0b-5609f9ac0d22", 00:23:03.492 "assigned_rate_limits": { 00:23:03.492 "rw_ios_per_sec": 0, 00:23:03.492 "rw_mbytes_per_sec": 0, 00:23:03.492 "r_mbytes_per_sec": 0, 00:23:03.492 "w_mbytes_per_sec": 0 00:23:03.492 }, 00:23:03.492 "claimed": true, 00:23:03.492 "claim_type": "exclusive_write", 00:23:03.492 "zoned": false, 00:23:03.492 "supported_io_types": { 00:23:03.492 "read": true, 00:23:03.492 "write": true, 00:23:03.492 "unmap": true, 00:23:03.492 "flush": true, 00:23:03.492 "reset": true, 00:23:03.492 "nvme_admin": false, 
00:23:03.492 "nvme_io": false, 00:23:03.492 "nvme_io_md": false, 00:23:03.492 "write_zeroes": true, 00:23:03.492 "zcopy": true, 00:23:03.492 "get_zone_info": false, 00:23:03.492 "zone_management": false, 00:23:03.492 "zone_append": false, 00:23:03.492 "compare": false, 00:23:03.492 "compare_and_write": false, 00:23:03.492 "abort": true, 00:23:03.492 "seek_hole": false, 00:23:03.492 "seek_data": false, 00:23:03.492 "copy": true, 00:23:03.492 "nvme_iov_md": false 00:23:03.492 }, 00:23:03.492 "memory_domains": [ 00:23:03.492 { 00:23:03.492 "dma_device_id": "system", 00:23:03.492 "dma_device_type": 1 00:23:03.492 }, 00:23:03.492 { 00:23:03.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.492 "dma_device_type": 2 00:23:03.492 } 00:23:03.492 ], 00:23:03.492 "driver_specific": {} 00:23:03.492 } 00:23:03.492 ] 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:03.492 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.493 "name": "Existed_Raid", 00:23:03.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.493 "strip_size_kb": 64, 00:23:03.493 "state": "configuring", 00:23:03.493 "raid_level": "raid5f", 00:23:03.493 "superblock": false, 00:23:03.493 "num_base_bdevs": 4, 00:23:03.493 "num_base_bdevs_discovered": 3, 00:23:03.493 "num_base_bdevs_operational": 4, 00:23:03.493 "base_bdevs_list": [ 00:23:03.493 { 00:23:03.493 "name": "BaseBdev1", 00:23:03.493 "uuid": "35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:03.493 "is_configured": true, 00:23:03.493 "data_offset": 0, 00:23:03.493 "data_size": 65536 00:23:03.493 }, 00:23:03.493 { 00:23:03.493 "name": "BaseBdev2", 00:23:03.493 "uuid": "a1844d37-7623-4b78-90c9-7c134802de3d", 00:23:03.493 "is_configured": true, 00:23:03.493 "data_offset": 0, 00:23:03.493 "data_size": 65536 00:23:03.493 }, 00:23:03.493 { 
00:23:03.493 "name": "BaseBdev3", 00:23:03.493 "uuid": "ad06e0ed-1d95-4989-9c0b-5609f9ac0d22", 00:23:03.493 "is_configured": true, 00:23:03.493 "data_offset": 0, 00:23:03.493 "data_size": 65536 00:23:03.493 }, 00:23:03.493 { 00:23:03.493 "name": "BaseBdev4", 00:23:03.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.493 "is_configured": false, 00:23:03.493 "data_offset": 0, 00:23:03.493 "data_size": 0 00:23:03.493 } 00:23:03.493 ] 00:23:03.493 }' 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.493 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.754 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:03.754 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.754 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 [2024-12-09 23:04:19.642159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:04.013 [2024-12-09 23:04:19.642244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:04.013 [2024-12-09 23:04:19.642256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:04.013 [2024-12-09 23:04:19.642626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:04.013 [2024-12-09 23:04:19.652183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:04.013 [2024-12-09 23:04:19.652246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:04.013 [2024-12-09 23:04:19.652656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.013 BaseBdev4 00:23:04.013 23:04:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 [ 00:23:04.013 { 00:23:04.013 "name": "BaseBdev4", 00:23:04.013 "aliases": [ 00:23:04.013 "aebfdafc-35d8-4e57-bf37-0ef38744f018" 00:23:04.013 ], 00:23:04.013 "product_name": "Malloc disk", 00:23:04.013 "block_size": 512, 00:23:04.013 "num_blocks": 65536, 00:23:04.013 "uuid": "aebfdafc-35d8-4e57-bf37-0ef38744f018", 00:23:04.013 "assigned_rate_limits": { 00:23:04.013 "rw_ios_per_sec": 0, 00:23:04.013 
"rw_mbytes_per_sec": 0, 00:23:04.013 "r_mbytes_per_sec": 0, 00:23:04.013 "w_mbytes_per_sec": 0 00:23:04.013 }, 00:23:04.013 "claimed": true, 00:23:04.013 "claim_type": "exclusive_write", 00:23:04.013 "zoned": false, 00:23:04.013 "supported_io_types": { 00:23:04.013 "read": true, 00:23:04.013 "write": true, 00:23:04.013 "unmap": true, 00:23:04.013 "flush": true, 00:23:04.013 "reset": true, 00:23:04.013 "nvme_admin": false, 00:23:04.013 "nvme_io": false, 00:23:04.013 "nvme_io_md": false, 00:23:04.013 "write_zeroes": true, 00:23:04.013 "zcopy": true, 00:23:04.013 "get_zone_info": false, 00:23:04.013 "zone_management": false, 00:23:04.013 "zone_append": false, 00:23:04.013 "compare": false, 00:23:04.013 "compare_and_write": false, 00:23:04.013 "abort": true, 00:23:04.013 "seek_hole": false, 00:23:04.013 "seek_data": false, 00:23:04.013 "copy": true, 00:23:04.013 "nvme_iov_md": false 00:23:04.013 }, 00:23:04.013 "memory_domains": [ 00:23:04.013 { 00:23:04.013 "dma_device_id": "system", 00:23:04.013 "dma_device_type": 1 00:23:04.013 }, 00:23:04.013 { 00:23:04.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.013 "dma_device_type": 2 00:23:04.013 } 00:23:04.013 ], 00:23:04.013 "driver_specific": {} 00:23:04.013 } 00:23:04.013 ] 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.013 23:04:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.013 "name": "Existed_Raid", 00:23:04.013 "uuid": "8f6bb4e1-96fc-42c2-9363-18cd9fa5d03f", 00:23:04.013 "strip_size_kb": 64, 00:23:04.013 "state": "online", 00:23:04.013 "raid_level": "raid5f", 00:23:04.013 "superblock": false, 00:23:04.013 "num_base_bdevs": 4, 00:23:04.013 "num_base_bdevs_discovered": 4, 00:23:04.013 "num_base_bdevs_operational": 4, 00:23:04.013 "base_bdevs_list": [ 00:23:04.013 { 00:23:04.013 "name": 
"BaseBdev1", 00:23:04.013 "uuid": "35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:04.013 "is_configured": true, 00:23:04.013 "data_offset": 0, 00:23:04.013 "data_size": 65536 00:23:04.013 }, 00:23:04.013 { 00:23:04.013 "name": "BaseBdev2", 00:23:04.013 "uuid": "a1844d37-7623-4b78-90c9-7c134802de3d", 00:23:04.013 "is_configured": true, 00:23:04.013 "data_offset": 0, 00:23:04.013 "data_size": 65536 00:23:04.013 }, 00:23:04.013 { 00:23:04.013 "name": "BaseBdev3", 00:23:04.013 "uuid": "ad06e0ed-1d95-4989-9c0b-5609f9ac0d22", 00:23:04.013 "is_configured": true, 00:23:04.013 "data_offset": 0, 00:23:04.013 "data_size": 65536 00:23:04.013 }, 00:23:04.013 { 00:23:04.013 "name": "BaseBdev4", 00:23:04.013 "uuid": "aebfdafc-35d8-4e57-bf37-0ef38744f018", 00:23:04.013 "is_configured": true, 00:23:04.013 "data_offset": 0, 00:23:04.013 "data_size": 65536 00:23:04.013 } 00:23:04.013 ] 00:23:04.013 }' 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.013 23:04:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:04.581 [2024-12-09 23:04:20.186541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:04.581 "name": "Existed_Raid", 00:23:04.581 "aliases": [ 00:23:04.581 "8f6bb4e1-96fc-42c2-9363-18cd9fa5d03f" 00:23:04.581 ], 00:23:04.581 "product_name": "Raid Volume", 00:23:04.581 "block_size": 512, 00:23:04.581 "num_blocks": 196608, 00:23:04.581 "uuid": "8f6bb4e1-96fc-42c2-9363-18cd9fa5d03f", 00:23:04.581 "assigned_rate_limits": { 00:23:04.581 "rw_ios_per_sec": 0, 00:23:04.581 "rw_mbytes_per_sec": 0, 00:23:04.581 "r_mbytes_per_sec": 0, 00:23:04.581 "w_mbytes_per_sec": 0 00:23:04.581 }, 00:23:04.581 "claimed": false, 00:23:04.581 "zoned": false, 00:23:04.581 "supported_io_types": { 00:23:04.581 "read": true, 00:23:04.581 "write": true, 00:23:04.581 "unmap": false, 00:23:04.581 "flush": false, 00:23:04.581 "reset": true, 00:23:04.581 "nvme_admin": false, 00:23:04.581 "nvme_io": false, 00:23:04.581 "nvme_io_md": false, 00:23:04.581 "write_zeroes": true, 00:23:04.581 "zcopy": false, 00:23:04.581 "get_zone_info": false, 00:23:04.581 "zone_management": false, 00:23:04.581 "zone_append": false, 00:23:04.581 "compare": false, 00:23:04.581 "compare_and_write": false, 00:23:04.581 "abort": false, 00:23:04.581 "seek_hole": false, 00:23:04.581 "seek_data": false, 00:23:04.581 "copy": false, 00:23:04.581 "nvme_iov_md": false 00:23:04.581 }, 00:23:04.581 "driver_specific": { 00:23:04.581 "raid": { 00:23:04.581 "uuid": "8f6bb4e1-96fc-42c2-9363-18cd9fa5d03f", 00:23:04.581 "strip_size_kb": 64, 
00:23:04.581 "state": "online", 00:23:04.581 "raid_level": "raid5f", 00:23:04.581 "superblock": false, 00:23:04.581 "num_base_bdevs": 4, 00:23:04.581 "num_base_bdevs_discovered": 4, 00:23:04.581 "num_base_bdevs_operational": 4, 00:23:04.581 "base_bdevs_list": [ 00:23:04.581 { 00:23:04.581 "name": "BaseBdev1", 00:23:04.581 "uuid": "35422d8c-9ce7-4af0-8e76-91af921cf1cd", 00:23:04.581 "is_configured": true, 00:23:04.581 "data_offset": 0, 00:23:04.581 "data_size": 65536 00:23:04.581 }, 00:23:04.581 { 00:23:04.581 "name": "BaseBdev2", 00:23:04.581 "uuid": "a1844d37-7623-4b78-90c9-7c134802de3d", 00:23:04.581 "is_configured": true, 00:23:04.581 "data_offset": 0, 00:23:04.581 "data_size": 65536 00:23:04.581 }, 00:23:04.581 { 00:23:04.581 "name": "BaseBdev3", 00:23:04.581 "uuid": "ad06e0ed-1d95-4989-9c0b-5609f9ac0d22", 00:23:04.581 "is_configured": true, 00:23:04.581 "data_offset": 0, 00:23:04.581 "data_size": 65536 00:23:04.581 }, 00:23:04.581 { 00:23:04.581 "name": "BaseBdev4", 00:23:04.581 "uuid": "aebfdafc-35d8-4e57-bf37-0ef38744f018", 00:23:04.581 "is_configured": true, 00:23:04.581 "data_offset": 0, 00:23:04.581 "data_size": 65536 00:23:04.581 } 00:23:04.581 ] 00:23:04.581 } 00:23:04.581 } 00:23:04.581 }' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:04.581 BaseBdev2 00:23:04.581 BaseBdev3 00:23:04.581 BaseBdev4' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:04.581 23:04:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.581 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:04.840 [2024-12-09 23:04:20.501845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:04.840 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.841 23:04:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.841 "name": "Existed_Raid", 00:23:04.841 "uuid": "8f6bb4e1-96fc-42c2-9363-18cd9fa5d03f", 00:23:04.841 "strip_size_kb": 64, 00:23:04.841 "state": "online", 00:23:04.841 "raid_level": "raid5f", 00:23:04.841 "superblock": false, 00:23:04.841 "num_base_bdevs": 4, 00:23:04.841 "num_base_bdevs_discovered": 3, 00:23:04.841 "num_base_bdevs_operational": 3, 00:23:04.841 "base_bdevs_list": [ 00:23:04.841 { 00:23:04.841 "name": null, 00:23:04.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.841 "is_configured": false, 00:23:04.841 "data_offset": 0, 00:23:04.841 "data_size": 65536 00:23:04.841 }, 00:23:04.841 { 00:23:04.841 "name": "BaseBdev2", 00:23:04.841 "uuid": "a1844d37-7623-4b78-90c9-7c134802de3d", 00:23:04.841 "is_configured": true, 00:23:04.841 "data_offset": 0, 00:23:04.841 "data_size": 65536 00:23:04.841 }, 00:23:04.841 { 00:23:04.841 "name": "BaseBdev3", 00:23:04.841 "uuid": "ad06e0ed-1d95-4989-9c0b-5609f9ac0d22", 00:23:04.841 "is_configured": true, 00:23:04.841 "data_offset": 0, 00:23:04.841 "data_size": 65536 00:23:04.841 }, 00:23:04.841 { 00:23:04.841 "name": "BaseBdev4", 00:23:04.841 "uuid": "aebfdafc-35d8-4e57-bf37-0ef38744f018", 00:23:04.841 "is_configured": true, 00:23:04.841 "data_offset": 0, 00:23:04.841 "data_size": 65536 00:23:04.841 } 00:23:04.841 ] 00:23:04.841 }' 00:23:04.841 
23:04:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.841 23:04:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.408 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.408 [2024-12-09 23:04:21.186610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:05.408 [2024-12-09 23:04:21.186735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.743 [2024-12-09 23:04:21.302853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 [2024-12-09 23:04:21.366810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.743 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.000 [2024-12-09 23:04:21.542770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:06.000 [2024-12-09 23:04:21.542853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:06.000 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.000 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:06.000 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:06.000 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 23:04:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 BaseBdev2 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 [ 00:23:06.001 { 00:23:06.001 "name": "BaseBdev2", 00:23:06.001 "aliases": [ 00:23:06.001 "9e55c42c-beab-4f6c-98ab-f92a302b03bb" 00:23:06.001 ], 00:23:06.001 "product_name": "Malloc disk", 00:23:06.001 "block_size": 512, 00:23:06.001 "num_blocks": 65536, 00:23:06.001 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:06.001 "assigned_rate_limits": { 00:23:06.001 "rw_ios_per_sec": 0, 00:23:06.001 "rw_mbytes_per_sec": 0, 00:23:06.001 "r_mbytes_per_sec": 0, 00:23:06.001 "w_mbytes_per_sec": 0 00:23:06.001 }, 00:23:06.001 "claimed": false, 00:23:06.001 "zoned": false, 00:23:06.001 "supported_io_types": { 00:23:06.001 "read": true, 00:23:06.001 "write": true, 00:23:06.001 "unmap": true, 00:23:06.001 "flush": true, 00:23:06.001 "reset": true, 00:23:06.001 "nvme_admin": false, 00:23:06.001 "nvme_io": false, 00:23:06.001 "nvme_io_md": false, 00:23:06.001 "write_zeroes": true, 00:23:06.001 "zcopy": true, 00:23:06.001 "get_zone_info": false, 00:23:06.001 "zone_management": false, 00:23:06.001 "zone_append": false, 00:23:06.001 "compare": false, 00:23:06.001 "compare_and_write": false, 00:23:06.001 "abort": true, 00:23:06.001 "seek_hole": false, 00:23:06.001 "seek_data": false, 00:23:06.001 "copy": true, 00:23:06.001 "nvme_iov_md": false 00:23:06.001 }, 00:23:06.001 "memory_domains": [ 00:23:06.001 { 00:23:06.001 "dma_device_id": "system", 00:23:06.001 "dma_device_type": 1 00:23:06.001 }, 
00:23:06.001 { 00:23:06.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.001 "dma_device_type": 2 00:23:06.001 } 00:23:06.001 ], 00:23:06.001 "driver_specific": {} 00:23:06.001 } 00:23:06.001 ] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 BaseBdev3 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.001 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.259 [ 00:23:06.259 { 00:23:06.259 "name": "BaseBdev3", 00:23:06.259 "aliases": [ 00:23:06.259 "22e63243-b4c6-4a06-8972-5eb9d6936734" 00:23:06.259 ], 00:23:06.259 "product_name": "Malloc disk", 00:23:06.259 "block_size": 512, 00:23:06.259 "num_blocks": 65536, 00:23:06.259 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:06.259 "assigned_rate_limits": { 00:23:06.259 "rw_ios_per_sec": 0, 00:23:06.259 "rw_mbytes_per_sec": 0, 00:23:06.259 "r_mbytes_per_sec": 0, 00:23:06.259 "w_mbytes_per_sec": 0 00:23:06.259 }, 00:23:06.259 "claimed": false, 00:23:06.259 "zoned": false, 00:23:06.259 "supported_io_types": { 00:23:06.259 "read": true, 00:23:06.259 "write": true, 00:23:06.259 "unmap": true, 00:23:06.259 "flush": true, 00:23:06.259 "reset": true, 00:23:06.259 "nvme_admin": false, 00:23:06.259 "nvme_io": false, 00:23:06.259 "nvme_io_md": false, 00:23:06.259 "write_zeroes": true, 00:23:06.259 "zcopy": true, 00:23:06.259 "get_zone_info": false, 00:23:06.259 "zone_management": false, 00:23:06.259 "zone_append": false, 00:23:06.259 "compare": false, 00:23:06.259 "compare_and_write": false, 00:23:06.259 "abort": true, 00:23:06.259 "seek_hole": false, 00:23:06.259 "seek_data": false, 00:23:06.259 "copy": true, 00:23:06.259 "nvme_iov_md": false 00:23:06.259 }, 00:23:06.259 "memory_domains": [ 00:23:06.259 { 00:23:06.259 "dma_device_id": "system", 00:23:06.259 
"dma_device_type": 1 00:23:06.259 }, 00:23:06.259 { 00:23:06.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.259 "dma_device_type": 2 00:23:06.259 } 00:23:06.259 ], 00:23:06.259 "driver_specific": {} 00:23:06.259 } 00:23:06.259 ] 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.259 BaseBdev4 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:06.259 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:06.260 23:04:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.260 [ 00:23:06.260 { 00:23:06.260 "name": "BaseBdev4", 00:23:06.260 "aliases": [ 00:23:06.260 "c1e4432b-6f2f-4fb2-be55-b20425a39196" 00:23:06.260 ], 00:23:06.260 "product_name": "Malloc disk", 00:23:06.260 "block_size": 512, 00:23:06.260 "num_blocks": 65536, 00:23:06.260 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:06.260 "assigned_rate_limits": { 00:23:06.260 "rw_ios_per_sec": 0, 00:23:06.260 "rw_mbytes_per_sec": 0, 00:23:06.260 "r_mbytes_per_sec": 0, 00:23:06.260 "w_mbytes_per_sec": 0 00:23:06.260 }, 00:23:06.260 "claimed": false, 00:23:06.260 "zoned": false, 00:23:06.260 "supported_io_types": { 00:23:06.260 "read": true, 00:23:06.260 "write": true, 00:23:06.260 "unmap": true, 00:23:06.260 "flush": true, 00:23:06.260 "reset": true, 00:23:06.260 "nvme_admin": false, 00:23:06.260 "nvme_io": false, 00:23:06.260 "nvme_io_md": false, 00:23:06.260 "write_zeroes": true, 00:23:06.260 "zcopy": true, 00:23:06.260 "get_zone_info": false, 00:23:06.260 "zone_management": false, 00:23:06.260 "zone_append": false, 00:23:06.260 "compare": false, 00:23:06.260 "compare_and_write": false, 00:23:06.260 "abort": true, 00:23:06.260 "seek_hole": false, 00:23:06.260 "seek_data": false, 00:23:06.260 "copy": true, 00:23:06.260 "nvme_iov_md": false 00:23:06.260 }, 00:23:06.260 "memory_domains": [ 00:23:06.260 { 00:23:06.260 
"dma_device_id": "system", 00:23:06.260 "dma_device_type": 1 00:23:06.260 }, 00:23:06.260 { 00:23:06.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.260 "dma_device_type": 2 00:23:06.260 } 00:23:06.260 ], 00:23:06.260 "driver_specific": {} 00:23:06.260 } 00:23:06.260 ] 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.260 [2024-12-09 23:04:21.959192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:06.260 [2024-12-09 23:04:21.959329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:06.260 [2024-12-09 23:04:21.959408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:06.260 [2024-12-09 23:04:21.961666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:06.260 [2024-12-09 23:04:21.961827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.260 23:04:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.260 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.260 "name": "Existed_Raid", 00:23:06.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.260 "strip_size_kb": 64, 00:23:06.260 "state": "configuring", 00:23:06.260 "raid_level": "raid5f", 00:23:06.260 "superblock": false, 00:23:06.260 
"num_base_bdevs": 4, 00:23:06.260 "num_base_bdevs_discovered": 3, 00:23:06.260 "num_base_bdevs_operational": 4, 00:23:06.260 "base_bdevs_list": [ 00:23:06.260 { 00:23:06.260 "name": "BaseBdev1", 00:23:06.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.260 "is_configured": false, 00:23:06.260 "data_offset": 0, 00:23:06.260 "data_size": 0 00:23:06.260 }, 00:23:06.260 { 00:23:06.260 "name": "BaseBdev2", 00:23:06.260 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:06.260 "is_configured": true, 00:23:06.260 "data_offset": 0, 00:23:06.260 "data_size": 65536 00:23:06.260 }, 00:23:06.260 { 00:23:06.260 "name": "BaseBdev3", 00:23:06.260 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:06.260 "is_configured": true, 00:23:06.260 "data_offset": 0, 00:23:06.260 "data_size": 65536 00:23:06.260 }, 00:23:06.260 { 00:23:06.260 "name": "BaseBdev4", 00:23:06.260 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:06.260 "is_configured": true, 00:23:06.260 "data_offset": 0, 00:23:06.260 "data_size": 65536 00:23:06.260 } 00:23:06.260 ] 00:23:06.260 }' 00:23:06.260 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.260 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.827 [2024-12-09 23:04:22.422448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.827 "name": "Existed_Raid", 00:23:06.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.827 "strip_size_kb": 64, 00:23:06.827 "state": "configuring", 00:23:06.827 "raid_level": "raid5f", 00:23:06.827 "superblock": false, 00:23:06.827 "num_base_bdevs": 4, 
00:23:06.827 "num_base_bdevs_discovered": 2, 00:23:06.827 "num_base_bdevs_operational": 4, 00:23:06.827 "base_bdevs_list": [ 00:23:06.827 { 00:23:06.827 "name": "BaseBdev1", 00:23:06.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.827 "is_configured": false, 00:23:06.827 "data_offset": 0, 00:23:06.827 "data_size": 0 00:23:06.827 }, 00:23:06.827 { 00:23:06.827 "name": null, 00:23:06.827 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:06.827 "is_configured": false, 00:23:06.827 "data_offset": 0, 00:23:06.827 "data_size": 65536 00:23:06.827 }, 00:23:06.827 { 00:23:06.827 "name": "BaseBdev3", 00:23:06.827 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:06.827 "is_configured": true, 00:23:06.827 "data_offset": 0, 00:23:06.827 "data_size": 65536 00:23:06.827 }, 00:23:06.827 { 00:23:06.827 "name": "BaseBdev4", 00:23:06.827 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:06.827 "is_configured": true, 00:23:06.827 "data_offset": 0, 00:23:06.827 "data_size": 65536 00:23:06.827 } 00:23:06.827 ] 00:23:06.827 }' 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.827 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:07.085 23:04:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.085 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.343 [2024-12-09 23:04:22.977530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.343 BaseBdev1 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:07.343 23:04:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.343 23:04:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.343 [ 00:23:07.343 { 00:23:07.343 "name": "BaseBdev1", 00:23:07.343 "aliases": [ 00:23:07.343 "324b3698-797d-4a8d-a611-1b5116df7665" 00:23:07.343 ], 00:23:07.343 "product_name": "Malloc disk", 00:23:07.343 "block_size": 512, 00:23:07.343 "num_blocks": 65536, 00:23:07.343 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:07.343 "assigned_rate_limits": { 00:23:07.343 "rw_ios_per_sec": 0, 00:23:07.343 "rw_mbytes_per_sec": 0, 00:23:07.343 "r_mbytes_per_sec": 0, 00:23:07.343 "w_mbytes_per_sec": 0 00:23:07.343 }, 00:23:07.343 "claimed": true, 00:23:07.343 "claim_type": "exclusive_write", 00:23:07.343 "zoned": false, 00:23:07.343 "supported_io_types": { 00:23:07.343 "read": true, 00:23:07.343 "write": true, 00:23:07.343 "unmap": true, 00:23:07.343 "flush": true, 00:23:07.343 "reset": true, 00:23:07.343 "nvme_admin": false, 00:23:07.343 "nvme_io": false, 00:23:07.343 "nvme_io_md": false, 00:23:07.343 "write_zeroes": true, 00:23:07.343 "zcopy": true, 00:23:07.343 "get_zone_info": false, 00:23:07.343 "zone_management": false, 00:23:07.343 "zone_append": false, 00:23:07.343 "compare": false, 00:23:07.343 "compare_and_write": false, 00:23:07.343 "abort": true, 00:23:07.343 "seek_hole": false, 00:23:07.343 "seek_data": false, 00:23:07.343 "copy": true, 00:23:07.343 "nvme_iov_md": false 00:23:07.343 }, 00:23:07.343 "memory_domains": [ 00:23:07.343 { 00:23:07.343 "dma_device_id": "system", 00:23:07.343 "dma_device_type": 1 00:23:07.343 }, 00:23:07.343 { 00:23:07.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.343 "dma_device_type": 2 00:23:07.343 } 00:23:07.343 ], 00:23:07.343 "driver_specific": {} 00:23:07.343 } 00:23:07.343 ] 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:07.343 23:04:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.343 "name": "Existed_Raid", 00:23:07.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.343 "strip_size_kb": 64, 00:23:07.343 "state": 
"configuring", 00:23:07.343 "raid_level": "raid5f", 00:23:07.343 "superblock": false, 00:23:07.343 "num_base_bdevs": 4, 00:23:07.343 "num_base_bdevs_discovered": 3, 00:23:07.343 "num_base_bdevs_operational": 4, 00:23:07.343 "base_bdevs_list": [ 00:23:07.343 { 00:23:07.343 "name": "BaseBdev1", 00:23:07.343 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:07.343 "is_configured": true, 00:23:07.343 "data_offset": 0, 00:23:07.343 "data_size": 65536 00:23:07.343 }, 00:23:07.343 { 00:23:07.343 "name": null, 00:23:07.343 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:07.343 "is_configured": false, 00:23:07.343 "data_offset": 0, 00:23:07.343 "data_size": 65536 00:23:07.343 }, 00:23:07.343 { 00:23:07.343 "name": "BaseBdev3", 00:23:07.343 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:07.343 "is_configured": true, 00:23:07.343 "data_offset": 0, 00:23:07.343 "data_size": 65536 00:23:07.343 }, 00:23:07.343 { 00:23:07.343 "name": "BaseBdev4", 00:23:07.343 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:07.343 "is_configured": true, 00:23:07.343 "data_offset": 0, 00:23:07.343 "data_size": 65536 00:23:07.343 } 00:23:07.343 ] 00:23:07.343 }' 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.343 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.908 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.909 23:04:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.909 [2024-12-09 23:04:23.536735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.909 23:04:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.909 "name": "Existed_Raid", 00:23:07.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.909 "strip_size_kb": 64, 00:23:07.909 "state": "configuring", 00:23:07.909 "raid_level": "raid5f", 00:23:07.909 "superblock": false, 00:23:07.909 "num_base_bdevs": 4, 00:23:07.909 "num_base_bdevs_discovered": 2, 00:23:07.909 "num_base_bdevs_operational": 4, 00:23:07.909 "base_bdevs_list": [ 00:23:07.909 { 00:23:07.909 "name": "BaseBdev1", 00:23:07.909 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:07.909 "is_configured": true, 00:23:07.909 "data_offset": 0, 00:23:07.909 "data_size": 65536 00:23:07.909 }, 00:23:07.909 { 00:23:07.909 "name": null, 00:23:07.909 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:07.909 "is_configured": false, 00:23:07.909 "data_offset": 0, 00:23:07.909 "data_size": 65536 00:23:07.909 }, 00:23:07.909 { 00:23:07.909 "name": null, 00:23:07.909 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:07.909 "is_configured": false, 00:23:07.909 "data_offset": 0, 00:23:07.909 "data_size": 65536 00:23:07.909 }, 00:23:07.909 { 00:23:07.909 "name": "BaseBdev4", 00:23:07.909 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:07.909 "is_configured": true, 00:23:07.909 "data_offset": 0, 00:23:07.909 "data_size": 65536 00:23:07.909 } 00:23:07.909 ] 00:23:07.909 }' 00:23:07.909 23:04:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.909 23:04:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 [2024-12-09 23:04:24.064082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.479 
23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.479 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.479 "name": "Existed_Raid", 00:23:08.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.479 "strip_size_kb": 64, 00:23:08.479 "state": "configuring", 00:23:08.479 "raid_level": "raid5f", 00:23:08.479 "superblock": false, 00:23:08.479 "num_base_bdevs": 4, 00:23:08.479 "num_base_bdevs_discovered": 3, 00:23:08.479 "num_base_bdevs_operational": 4, 00:23:08.479 "base_bdevs_list": [ 00:23:08.479 { 00:23:08.479 "name": "BaseBdev1", 00:23:08.479 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:08.479 "is_configured": true, 00:23:08.479 "data_offset": 0, 00:23:08.479 "data_size": 65536 00:23:08.479 }, 00:23:08.479 { 00:23:08.479 "name": null, 00:23:08.479 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:08.479 "is_configured": 
false, 00:23:08.479 "data_offset": 0, 00:23:08.479 "data_size": 65536 00:23:08.479 }, 00:23:08.479 { 00:23:08.479 "name": "BaseBdev3", 00:23:08.479 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:08.479 "is_configured": true, 00:23:08.479 "data_offset": 0, 00:23:08.479 "data_size": 65536 00:23:08.479 }, 00:23:08.479 { 00:23:08.479 "name": "BaseBdev4", 00:23:08.479 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:08.479 "is_configured": true, 00:23:08.479 "data_offset": 0, 00:23:08.479 "data_size": 65536 00:23:08.480 } 00:23:08.480 ] 00:23:08.480 }' 00:23:08.480 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.480 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.738 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.738 [2024-12-09 23:04:24.591719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.997 "name": "Existed_Raid", 00:23:08.997 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:08.997 "strip_size_kb": 64, 00:23:08.997 "state": "configuring", 00:23:08.997 "raid_level": "raid5f", 00:23:08.997 "superblock": false, 00:23:08.997 "num_base_bdevs": 4, 00:23:08.997 "num_base_bdevs_discovered": 2, 00:23:08.997 "num_base_bdevs_operational": 4, 00:23:08.997 "base_bdevs_list": [ 00:23:08.997 { 00:23:08.997 "name": null, 00:23:08.997 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:08.997 "is_configured": false, 00:23:08.997 "data_offset": 0, 00:23:08.997 "data_size": 65536 00:23:08.997 }, 00:23:08.997 { 00:23:08.997 "name": null, 00:23:08.997 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:08.997 "is_configured": false, 00:23:08.997 "data_offset": 0, 00:23:08.997 "data_size": 65536 00:23:08.997 }, 00:23:08.997 { 00:23:08.997 "name": "BaseBdev3", 00:23:08.997 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:08.997 "is_configured": true, 00:23:08.997 "data_offset": 0, 00:23:08.997 "data_size": 65536 00:23:08.997 }, 00:23:08.997 { 00:23:08.997 "name": "BaseBdev4", 00:23:08.997 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:08.997 "is_configured": true, 00:23:08.997 "data_offset": 0, 00:23:08.997 "data_size": 65536 00:23:08.997 } 00:23:08.997 ] 00:23:08.997 }' 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.997 23:04:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.562 [2024-12-09 23:04:25.183567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.562 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.562 "name": "Existed_Raid", 00:23:09.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.562 "strip_size_kb": 64, 00:23:09.562 "state": "configuring", 00:23:09.562 "raid_level": "raid5f", 00:23:09.562 "superblock": false, 00:23:09.562 "num_base_bdevs": 4, 00:23:09.562 "num_base_bdevs_discovered": 3, 00:23:09.563 "num_base_bdevs_operational": 4, 00:23:09.563 "base_bdevs_list": [ 00:23:09.563 { 00:23:09.563 "name": null, 00:23:09.563 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:09.563 "is_configured": false, 00:23:09.563 "data_offset": 0, 00:23:09.563 "data_size": 65536 00:23:09.563 }, 00:23:09.563 { 00:23:09.563 "name": "BaseBdev2", 00:23:09.563 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:09.563 "is_configured": true, 00:23:09.563 "data_offset": 0, 00:23:09.563 "data_size": 65536 00:23:09.563 }, 00:23:09.563 { 00:23:09.563 "name": "BaseBdev3", 00:23:09.563 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:09.563 "is_configured": true, 00:23:09.563 "data_offset": 0, 00:23:09.563 "data_size": 65536 00:23:09.563 }, 00:23:09.563 { 00:23:09.563 "name": "BaseBdev4", 00:23:09.563 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:09.563 "is_configured": true, 00:23:09.563 "data_offset": 0, 00:23:09.563 "data_size": 65536 00:23:09.563 } 00:23:09.563 ] 00:23:09.563 }' 00:23:09.563 23:04:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.563 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 324b3698-797d-4a8d-a611-1b5116df7665 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.133 [2024-12-09 23:04:25.851361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:10.133 [2024-12-09 
23:04:25.851438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:10.133 [2024-12-09 23:04:25.851447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:10.133 [2024-12-09 23:04:25.851785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:10.133 [2024-12-09 23:04:25.860665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:10.133 [2024-12-09 23:04:25.860721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:10.133 [2024-12-09 23:04:25.861108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.133 NewBaseBdev 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.133 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.133 [ 00:23:10.133 { 00:23:10.133 "name": "NewBaseBdev", 00:23:10.133 "aliases": [ 00:23:10.133 "324b3698-797d-4a8d-a611-1b5116df7665" 00:23:10.133 ], 00:23:10.133 "product_name": "Malloc disk", 00:23:10.133 "block_size": 512, 00:23:10.133 "num_blocks": 65536, 00:23:10.133 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:10.133 "assigned_rate_limits": { 00:23:10.133 "rw_ios_per_sec": 0, 00:23:10.133 "rw_mbytes_per_sec": 0, 00:23:10.133 "r_mbytes_per_sec": 0, 00:23:10.133 "w_mbytes_per_sec": 0 00:23:10.133 }, 00:23:10.133 "claimed": true, 00:23:10.133 "claim_type": "exclusive_write", 00:23:10.133 "zoned": false, 00:23:10.133 "supported_io_types": { 00:23:10.133 "read": true, 00:23:10.134 "write": true, 00:23:10.134 "unmap": true, 00:23:10.134 "flush": true, 00:23:10.134 "reset": true, 00:23:10.134 "nvme_admin": false, 00:23:10.134 "nvme_io": false, 00:23:10.134 "nvme_io_md": false, 00:23:10.134 "write_zeroes": true, 00:23:10.134 "zcopy": true, 00:23:10.134 "get_zone_info": false, 00:23:10.134 "zone_management": false, 00:23:10.134 "zone_append": false, 00:23:10.134 "compare": false, 00:23:10.134 "compare_and_write": false, 00:23:10.134 "abort": true, 00:23:10.134 "seek_hole": false, 00:23:10.134 "seek_data": false, 00:23:10.134 "copy": true, 00:23:10.134 "nvme_iov_md": false 00:23:10.134 }, 00:23:10.134 "memory_domains": [ 00:23:10.134 { 00:23:10.134 "dma_device_id": "system", 00:23:10.134 "dma_device_type": 1 00:23:10.134 }, 00:23:10.134 { 00:23:10.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.134 "dma_device_type": 2 00:23:10.134 } 
00:23:10.134 ], 00:23:10.134 "driver_specific": {} 00:23:10.134 } 00:23:10.134 ] 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.134 "name": "Existed_Raid", 00:23:10.134 "uuid": "9d4dac2e-d85e-4126-8674-ec7f85ec982c", 00:23:10.134 "strip_size_kb": 64, 00:23:10.134 "state": "online", 00:23:10.134 "raid_level": "raid5f", 00:23:10.134 "superblock": false, 00:23:10.134 "num_base_bdevs": 4, 00:23:10.134 "num_base_bdevs_discovered": 4, 00:23:10.134 "num_base_bdevs_operational": 4, 00:23:10.134 "base_bdevs_list": [ 00:23:10.134 { 00:23:10.134 "name": "NewBaseBdev", 00:23:10.134 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:10.134 "is_configured": true, 00:23:10.134 "data_offset": 0, 00:23:10.134 "data_size": 65536 00:23:10.134 }, 00:23:10.134 { 00:23:10.134 "name": "BaseBdev2", 00:23:10.134 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:10.134 "is_configured": true, 00:23:10.134 "data_offset": 0, 00:23:10.134 "data_size": 65536 00:23:10.134 }, 00:23:10.134 { 00:23:10.134 "name": "BaseBdev3", 00:23:10.134 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:10.134 "is_configured": true, 00:23:10.134 "data_offset": 0, 00:23:10.134 "data_size": 65536 00:23:10.134 }, 00:23:10.134 { 00:23:10.134 "name": "BaseBdev4", 00:23:10.134 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:10.134 "is_configured": true, 00:23:10.134 "data_offset": 0, 00:23:10.134 "data_size": 65536 00:23:10.134 } 00:23:10.134 ] 00:23:10.134 }' 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.134 23:04:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.703 [2024-12-09 23:04:26.406922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:10.703 "name": "Existed_Raid", 00:23:10.703 "aliases": [ 00:23:10.703 "9d4dac2e-d85e-4126-8674-ec7f85ec982c" 00:23:10.703 ], 00:23:10.703 "product_name": "Raid Volume", 00:23:10.703 "block_size": 512, 00:23:10.703 "num_blocks": 196608, 00:23:10.703 "uuid": "9d4dac2e-d85e-4126-8674-ec7f85ec982c", 00:23:10.703 "assigned_rate_limits": { 00:23:10.703 "rw_ios_per_sec": 0, 00:23:10.703 "rw_mbytes_per_sec": 0, 00:23:10.703 "r_mbytes_per_sec": 0, 00:23:10.703 "w_mbytes_per_sec": 0 00:23:10.703 }, 00:23:10.703 "claimed": false, 00:23:10.703 "zoned": false, 00:23:10.703 "supported_io_types": { 00:23:10.703 "read": true, 00:23:10.703 "write": true, 00:23:10.703 "unmap": false, 00:23:10.703 "flush": false, 00:23:10.703 "reset": true, 00:23:10.703 "nvme_admin": false, 00:23:10.703 "nvme_io": false, 00:23:10.703 "nvme_io_md": 
false, 00:23:10.703 "write_zeroes": true, 00:23:10.703 "zcopy": false, 00:23:10.703 "get_zone_info": false, 00:23:10.703 "zone_management": false, 00:23:10.703 "zone_append": false, 00:23:10.703 "compare": false, 00:23:10.703 "compare_and_write": false, 00:23:10.703 "abort": false, 00:23:10.703 "seek_hole": false, 00:23:10.703 "seek_data": false, 00:23:10.703 "copy": false, 00:23:10.703 "nvme_iov_md": false 00:23:10.703 }, 00:23:10.703 "driver_specific": { 00:23:10.703 "raid": { 00:23:10.703 "uuid": "9d4dac2e-d85e-4126-8674-ec7f85ec982c", 00:23:10.703 "strip_size_kb": 64, 00:23:10.703 "state": "online", 00:23:10.703 "raid_level": "raid5f", 00:23:10.703 "superblock": false, 00:23:10.703 "num_base_bdevs": 4, 00:23:10.703 "num_base_bdevs_discovered": 4, 00:23:10.703 "num_base_bdevs_operational": 4, 00:23:10.703 "base_bdevs_list": [ 00:23:10.703 { 00:23:10.703 "name": "NewBaseBdev", 00:23:10.703 "uuid": "324b3698-797d-4a8d-a611-1b5116df7665", 00:23:10.703 "is_configured": true, 00:23:10.703 "data_offset": 0, 00:23:10.703 "data_size": 65536 00:23:10.703 }, 00:23:10.703 { 00:23:10.703 "name": "BaseBdev2", 00:23:10.703 "uuid": "9e55c42c-beab-4f6c-98ab-f92a302b03bb", 00:23:10.703 "is_configured": true, 00:23:10.703 "data_offset": 0, 00:23:10.703 "data_size": 65536 00:23:10.703 }, 00:23:10.703 { 00:23:10.703 "name": "BaseBdev3", 00:23:10.703 "uuid": "22e63243-b4c6-4a06-8972-5eb9d6936734", 00:23:10.703 "is_configured": true, 00:23:10.703 "data_offset": 0, 00:23:10.703 "data_size": 65536 00:23:10.703 }, 00:23:10.703 { 00:23:10.703 "name": "BaseBdev4", 00:23:10.703 "uuid": "c1e4432b-6f2f-4fb2-be55-b20425a39196", 00:23:10.703 "is_configured": true, 00:23:10.703 "data_offset": 0, 00:23:10.703 "data_size": 65536 00:23:10.703 } 00:23:10.703 ] 00:23:10.703 } 00:23:10.703 } 00:23:10.703 }' 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:10.703 23:04:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:10.703 BaseBdev2 00:23:10.703 BaseBdev3 00:23:10.703 BaseBdev4' 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.703 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.968 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.968 [2024-12-09 23:04:26.766640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:10.968 [2024-12-09 23:04:26.766686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.968 [2024-12-09 23:04:26.766784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.968 [2024-12-09 23:04:26.767134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:10.968 [2024-12-09 23:04:26.767147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83455 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83455 ']' 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83455 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.969 23:04:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83455 00:23:10.969 killing process with pid 83455 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83455' 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83455 00:23:10.969 [2024-12-09 23:04:26.816754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:10.969 23:04:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83455 00:23:11.537 [2024-12-09 23:04:27.297733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:12.914 00:23:12.914 real 0m12.743s 00:23:12.914 user 0m20.031s 00:23:12.914 sys 0m2.250s 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.914 ************************************ 00:23:12.914 END TEST raid5f_state_function_test 00:23:12.914 ************************************ 00:23:12.914 23:04:28 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:12.914 23:04:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:12.914 23:04:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.914 23:04:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.914 ************************************ 00:23:12.914 START TEST 
raid5f_state_function_test_sb 00:23:12.914 ************************************ 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:12.914 
23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84138 00:23:12.914 Process raid pid: 84138 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84138' 00:23:12.914 23:04:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84138 00:23:12.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84138 ']' 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.914 23:04:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.172 [2024-12-09 23:04:28.856644] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:23:13.172 [2024-12-09 23:04:28.857528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.172 [2024-12-09 23:04:29.025872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.430 [2024-12-09 23:04:29.161688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.689 [2024-12-09 23:04:29.412822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.689 [2024-12-09 23:04:29.412975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.967 [2024-12-09 23:04:29.789421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:13.967 [2024-12-09 23:04:29.789512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:13.967 [2024-12-09 23:04:29.789524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:13.967 [2024-12-09 23:04:29.789537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:13.967 [2024-12-09 23:04:29.789545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:23:13.967 [2024-12-09 23:04:29.789556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:13.967 [2024-12-09 23:04:29.789563] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:13.967 [2024-12-09 23:04:29.789574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.967 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.225 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.225 "name": "Existed_Raid", 00:23:14.225 "uuid": "c13cb830-769c-43fe-b4cd-ff899c4017bb", 00:23:14.225 "strip_size_kb": 64, 00:23:14.225 "state": "configuring", 00:23:14.225 "raid_level": "raid5f", 00:23:14.225 "superblock": true, 00:23:14.225 "num_base_bdevs": 4, 00:23:14.225 "num_base_bdevs_discovered": 0, 00:23:14.225 "num_base_bdevs_operational": 4, 00:23:14.225 "base_bdevs_list": [ 00:23:14.225 { 00:23:14.225 "name": "BaseBdev1", 00:23:14.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.225 "is_configured": false, 00:23:14.225 "data_offset": 0, 00:23:14.225 "data_size": 0 00:23:14.225 }, 00:23:14.225 { 00:23:14.225 "name": "BaseBdev2", 00:23:14.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.225 "is_configured": false, 00:23:14.225 "data_offset": 0, 00:23:14.225 "data_size": 0 00:23:14.225 }, 00:23:14.225 { 00:23:14.225 "name": "BaseBdev3", 00:23:14.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.225 "is_configured": false, 00:23:14.225 "data_offset": 0, 00:23:14.225 "data_size": 0 00:23:14.225 }, 00:23:14.225 { 00:23:14.225 "name": "BaseBdev4", 00:23:14.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.225 "is_configured": false, 00:23:14.225 "data_offset": 0, 00:23:14.225 "data_size": 0 00:23:14.225 } 00:23:14.225 ] 00:23:14.225 }' 00:23:14.225 23:04:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.225 23:04:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.484 [2024-12-09 23:04:30.284732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:14.484 [2024-12-09 23:04:30.284841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.484 [2024-12-09 23:04:30.292743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.484 [2024-12-09 23:04:30.292844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.484 [2024-12-09 23:04:30.292894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.484 [2024-12-09 23:04:30.292924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.484 [2024-12-09 23:04:30.292957] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:14.484 [2024-12-09 23:04:30.292985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:14.484 [2024-12-09 23:04:30.293025] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:14.484 [2024-12-09 23:04:30.293054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.484 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.744 [2024-12-09 23:04:30.344856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.744 BaseBdev1 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.744 [ 00:23:14.744 { 00:23:14.744 "name": "BaseBdev1", 00:23:14.744 "aliases": [ 00:23:14.744 "cdb77c9d-b93f-4643-8a48-5cceea3715f0" 00:23:14.744 ], 00:23:14.744 "product_name": "Malloc disk", 00:23:14.744 "block_size": 512, 00:23:14.744 "num_blocks": 65536, 00:23:14.744 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:14.744 "assigned_rate_limits": { 00:23:14.744 "rw_ios_per_sec": 0, 00:23:14.744 "rw_mbytes_per_sec": 0, 00:23:14.744 "r_mbytes_per_sec": 0, 00:23:14.744 "w_mbytes_per_sec": 0 00:23:14.744 }, 00:23:14.744 "claimed": true, 00:23:14.744 "claim_type": "exclusive_write", 00:23:14.744 "zoned": false, 00:23:14.744 "supported_io_types": { 00:23:14.744 "read": true, 00:23:14.744 "write": true, 00:23:14.744 "unmap": true, 00:23:14.744 "flush": true, 00:23:14.744 "reset": true, 00:23:14.744 "nvme_admin": false, 00:23:14.744 "nvme_io": false, 00:23:14.744 "nvme_io_md": false, 00:23:14.744 "write_zeroes": true, 00:23:14.744 "zcopy": true, 00:23:14.744 "get_zone_info": false, 00:23:14.744 "zone_management": false, 00:23:14.744 "zone_append": false, 00:23:14.744 "compare": false, 00:23:14.744 "compare_and_write": false, 00:23:14.744 "abort": true, 00:23:14.744 "seek_hole": false, 00:23:14.744 "seek_data": false, 00:23:14.744 "copy": true, 00:23:14.744 "nvme_iov_md": false 00:23:14.744 }, 00:23:14.744 "memory_domains": [ 00:23:14.744 { 00:23:14.744 "dma_device_id": "system", 00:23:14.744 "dma_device_type": 1 00:23:14.744 }, 00:23:14.744 { 00:23:14.744 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:14.744 "dma_device_type": 2 00:23:14.744 } 00:23:14.744 ], 00:23:14.744 "driver_specific": {} 00:23:14.744 } 00:23:14.744 ] 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.744 "name": "Existed_Raid", 00:23:14.744 "uuid": "ed9b048f-5d31-46eb-94b2-ce6499d4f081", 00:23:14.744 "strip_size_kb": 64, 00:23:14.744 "state": "configuring", 00:23:14.744 "raid_level": "raid5f", 00:23:14.744 "superblock": true, 00:23:14.744 "num_base_bdevs": 4, 00:23:14.744 "num_base_bdevs_discovered": 1, 00:23:14.744 "num_base_bdevs_operational": 4, 00:23:14.744 "base_bdevs_list": [ 00:23:14.744 { 00:23:14.744 "name": "BaseBdev1", 00:23:14.744 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:14.744 "is_configured": true, 00:23:14.744 "data_offset": 2048, 00:23:14.744 "data_size": 63488 00:23:14.744 }, 00:23:14.744 { 00:23:14.744 "name": "BaseBdev2", 00:23:14.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.744 "is_configured": false, 00:23:14.744 "data_offset": 0, 00:23:14.744 "data_size": 0 00:23:14.744 }, 00:23:14.744 { 00:23:14.744 "name": "BaseBdev3", 00:23:14.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.744 "is_configured": false, 00:23:14.744 "data_offset": 0, 00:23:14.744 "data_size": 0 00:23:14.744 }, 00:23:14.744 { 00:23:14.744 "name": "BaseBdev4", 00:23:14.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.744 "is_configured": false, 00:23:14.744 "data_offset": 0, 00:23:14.744 "data_size": 0 00:23:14.744 } 00:23:14.744 ] 00:23:14.744 }' 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.744 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:15.315 23:04:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.315 [2024-12-09 23:04:30.868715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:15.315 [2024-12-09 23:04:30.868882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.315 [2024-12-09 23:04:30.880783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.315 [2024-12-09 23:04:30.882878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.315 [2024-12-09 23:04:30.882930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.315 [2024-12-09 23:04:30.882941] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.315 [2024-12-09 23:04:30.882954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.315 [2024-12-09 23:04:30.882962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:15.315 [2024-12-09 23:04:30.882973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:15.315 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.316 23:04:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.316 "name": "Existed_Raid", 00:23:15.316 "uuid": "27ed55a6-04be-4219-93c7-28744346f98c", 00:23:15.316 "strip_size_kb": 64, 00:23:15.316 "state": "configuring", 00:23:15.316 "raid_level": "raid5f", 00:23:15.316 "superblock": true, 00:23:15.316 "num_base_bdevs": 4, 00:23:15.316 "num_base_bdevs_discovered": 1, 00:23:15.316 "num_base_bdevs_operational": 4, 00:23:15.316 "base_bdevs_list": [ 00:23:15.316 { 00:23:15.316 "name": "BaseBdev1", 00:23:15.316 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:15.316 "is_configured": true, 00:23:15.316 "data_offset": 2048, 00:23:15.316 "data_size": 63488 00:23:15.316 }, 00:23:15.316 { 00:23:15.316 "name": "BaseBdev2", 00:23:15.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.316 "is_configured": false, 00:23:15.316 "data_offset": 0, 00:23:15.316 "data_size": 0 00:23:15.316 }, 00:23:15.316 { 00:23:15.316 "name": "BaseBdev3", 00:23:15.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.316 "is_configured": false, 00:23:15.316 "data_offset": 0, 00:23:15.316 "data_size": 0 00:23:15.316 }, 00:23:15.316 { 00:23:15.316 "name": "BaseBdev4", 00:23:15.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.316 "is_configured": false, 00:23:15.316 "data_offset": 0, 00:23:15.316 "data_size": 0 00:23:15.316 } 00:23:15.316 ] 00:23:15.316 }' 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.316 23:04:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.581 [2024-12-09 23:04:31.380099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:15.581 BaseBdev2 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.581 [ 00:23:15.581 { 00:23:15.581 "name": "BaseBdev2", 00:23:15.581 "aliases": [ 00:23:15.581 
"f60a277e-a6d6-428f-b1da-b925b9efe1eb" 00:23:15.581 ], 00:23:15.581 "product_name": "Malloc disk", 00:23:15.581 "block_size": 512, 00:23:15.581 "num_blocks": 65536, 00:23:15.581 "uuid": "f60a277e-a6d6-428f-b1da-b925b9efe1eb", 00:23:15.581 "assigned_rate_limits": { 00:23:15.581 "rw_ios_per_sec": 0, 00:23:15.581 "rw_mbytes_per_sec": 0, 00:23:15.581 "r_mbytes_per_sec": 0, 00:23:15.581 "w_mbytes_per_sec": 0 00:23:15.581 }, 00:23:15.581 "claimed": true, 00:23:15.581 "claim_type": "exclusive_write", 00:23:15.581 "zoned": false, 00:23:15.581 "supported_io_types": { 00:23:15.581 "read": true, 00:23:15.581 "write": true, 00:23:15.581 "unmap": true, 00:23:15.581 "flush": true, 00:23:15.581 "reset": true, 00:23:15.581 "nvme_admin": false, 00:23:15.581 "nvme_io": false, 00:23:15.581 "nvme_io_md": false, 00:23:15.581 "write_zeroes": true, 00:23:15.581 "zcopy": true, 00:23:15.581 "get_zone_info": false, 00:23:15.581 "zone_management": false, 00:23:15.581 "zone_append": false, 00:23:15.581 "compare": false, 00:23:15.581 "compare_and_write": false, 00:23:15.581 "abort": true, 00:23:15.581 "seek_hole": false, 00:23:15.581 "seek_data": false, 00:23:15.581 "copy": true, 00:23:15.581 "nvme_iov_md": false 00:23:15.581 }, 00:23:15.581 "memory_domains": [ 00:23:15.581 { 00:23:15.581 "dma_device_id": "system", 00:23:15.581 "dma_device_type": 1 00:23:15.581 }, 00:23:15.581 { 00:23:15.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.581 "dma_device_type": 2 00:23:15.581 } 00:23:15.581 ], 00:23:15.581 "driver_specific": {} 00:23:15.581 } 00:23:15.581 ] 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.581 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.844 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.844 "name": "Existed_Raid", 00:23:15.844 "uuid": 
"27ed55a6-04be-4219-93c7-28744346f98c", 00:23:15.844 "strip_size_kb": 64, 00:23:15.844 "state": "configuring", 00:23:15.844 "raid_level": "raid5f", 00:23:15.844 "superblock": true, 00:23:15.844 "num_base_bdevs": 4, 00:23:15.844 "num_base_bdevs_discovered": 2, 00:23:15.844 "num_base_bdevs_operational": 4, 00:23:15.844 "base_bdevs_list": [ 00:23:15.844 { 00:23:15.844 "name": "BaseBdev1", 00:23:15.844 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:15.844 "is_configured": true, 00:23:15.844 "data_offset": 2048, 00:23:15.844 "data_size": 63488 00:23:15.844 }, 00:23:15.844 { 00:23:15.844 "name": "BaseBdev2", 00:23:15.844 "uuid": "f60a277e-a6d6-428f-b1da-b925b9efe1eb", 00:23:15.844 "is_configured": true, 00:23:15.844 "data_offset": 2048, 00:23:15.844 "data_size": 63488 00:23:15.844 }, 00:23:15.844 { 00:23:15.844 "name": "BaseBdev3", 00:23:15.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.844 "is_configured": false, 00:23:15.844 "data_offset": 0, 00:23:15.844 "data_size": 0 00:23:15.844 }, 00:23:15.844 { 00:23:15.844 "name": "BaseBdev4", 00:23:15.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.844 "is_configured": false, 00:23:15.844 "data_offset": 0, 00:23:15.844 "data_size": 0 00:23:15.844 } 00:23:15.844 ] 00:23:15.844 }' 00:23:15.844 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.844 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.104 [2024-12-09 23:04:31.946368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:16.104 BaseBdev3 
00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.104 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.361 [ 00:23:16.361 { 00:23:16.361 "name": "BaseBdev3", 00:23:16.361 "aliases": [ 00:23:16.361 "1830877f-4e67-44c5-b80b-f35c57bb055e" 00:23:16.361 ], 00:23:16.361 "product_name": "Malloc disk", 00:23:16.361 "block_size": 512, 00:23:16.361 "num_blocks": 65536, 00:23:16.361 "uuid": "1830877f-4e67-44c5-b80b-f35c57bb055e", 00:23:16.361 
"assigned_rate_limits": { 00:23:16.361 "rw_ios_per_sec": 0, 00:23:16.361 "rw_mbytes_per_sec": 0, 00:23:16.361 "r_mbytes_per_sec": 0, 00:23:16.361 "w_mbytes_per_sec": 0 00:23:16.361 }, 00:23:16.361 "claimed": true, 00:23:16.361 "claim_type": "exclusive_write", 00:23:16.361 "zoned": false, 00:23:16.361 "supported_io_types": { 00:23:16.361 "read": true, 00:23:16.361 "write": true, 00:23:16.361 "unmap": true, 00:23:16.361 "flush": true, 00:23:16.361 "reset": true, 00:23:16.361 "nvme_admin": false, 00:23:16.361 "nvme_io": false, 00:23:16.361 "nvme_io_md": false, 00:23:16.361 "write_zeroes": true, 00:23:16.361 "zcopy": true, 00:23:16.361 "get_zone_info": false, 00:23:16.361 "zone_management": false, 00:23:16.361 "zone_append": false, 00:23:16.361 "compare": false, 00:23:16.361 "compare_and_write": false, 00:23:16.361 "abort": true, 00:23:16.361 "seek_hole": false, 00:23:16.361 "seek_data": false, 00:23:16.361 "copy": true, 00:23:16.361 "nvme_iov_md": false 00:23:16.361 }, 00:23:16.361 "memory_domains": [ 00:23:16.361 { 00:23:16.361 "dma_device_id": "system", 00:23:16.361 "dma_device_type": 1 00:23:16.361 }, 00:23:16.361 { 00:23:16.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.361 "dma_device_type": 2 00:23:16.361 } 00:23:16.361 ], 00:23:16.361 "driver_specific": {} 00:23:16.361 } 00:23:16.361 ] 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:16.361 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.362 23:04:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.362 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.362 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.362 "name": "Existed_Raid", 00:23:16.362 "uuid": "27ed55a6-04be-4219-93c7-28744346f98c", 00:23:16.362 "strip_size_kb": 64, 00:23:16.362 "state": "configuring", 00:23:16.362 "raid_level": "raid5f", 00:23:16.362 "superblock": true, 00:23:16.362 "num_base_bdevs": 4, 00:23:16.362 "num_base_bdevs_discovered": 3, 
00:23:16.362 "num_base_bdevs_operational": 4, 00:23:16.362 "base_bdevs_list": [ 00:23:16.362 { 00:23:16.362 "name": "BaseBdev1", 00:23:16.362 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:16.362 "is_configured": true, 00:23:16.362 "data_offset": 2048, 00:23:16.362 "data_size": 63488 00:23:16.362 }, 00:23:16.362 { 00:23:16.362 "name": "BaseBdev2", 00:23:16.362 "uuid": "f60a277e-a6d6-428f-b1da-b925b9efe1eb", 00:23:16.362 "is_configured": true, 00:23:16.362 "data_offset": 2048, 00:23:16.362 "data_size": 63488 00:23:16.362 }, 00:23:16.362 { 00:23:16.362 "name": "BaseBdev3", 00:23:16.362 "uuid": "1830877f-4e67-44c5-b80b-f35c57bb055e", 00:23:16.362 "is_configured": true, 00:23:16.362 "data_offset": 2048, 00:23:16.362 "data_size": 63488 00:23:16.362 }, 00:23:16.362 { 00:23:16.362 "name": "BaseBdev4", 00:23:16.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.362 "is_configured": false, 00:23:16.362 "data_offset": 0, 00:23:16.362 "data_size": 0 00:23:16.362 } 00:23:16.362 ] 00:23:16.362 }' 00:23:16.362 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.362 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.619 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:16.619 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.619 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.878 [2024-12-09 23:04:32.513182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:16.878 [2024-12-09 23:04:32.513519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:16.879 [2024-12-09 23:04:32.513538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:16.879 [2024-12-09 
23:04:32.513844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:16.879 BaseBdev4 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.879 [2024-12-09 23:04:32.522864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:16.879 [2024-12-09 23:04:32.522988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:16.879 [2024-12-09 23:04:32.523348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:16.879 23:04:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.879 [ 00:23:16.879 { 00:23:16.879 "name": "BaseBdev4", 00:23:16.879 "aliases": [ 00:23:16.879 "6dbe4288-8797-404e-99c9-be586cac9042" 00:23:16.879 ], 00:23:16.879 "product_name": "Malloc disk", 00:23:16.879 "block_size": 512, 00:23:16.879 "num_blocks": 65536, 00:23:16.879 "uuid": "6dbe4288-8797-404e-99c9-be586cac9042", 00:23:16.879 "assigned_rate_limits": { 00:23:16.879 "rw_ios_per_sec": 0, 00:23:16.879 "rw_mbytes_per_sec": 0, 00:23:16.879 "r_mbytes_per_sec": 0, 00:23:16.879 "w_mbytes_per_sec": 0 00:23:16.879 }, 00:23:16.879 "claimed": true, 00:23:16.879 "claim_type": "exclusive_write", 00:23:16.879 "zoned": false, 00:23:16.879 "supported_io_types": { 00:23:16.879 "read": true, 00:23:16.879 "write": true, 00:23:16.879 "unmap": true, 00:23:16.879 "flush": true, 00:23:16.879 "reset": true, 00:23:16.879 "nvme_admin": false, 00:23:16.879 "nvme_io": false, 00:23:16.879 "nvme_io_md": false, 00:23:16.879 "write_zeroes": true, 00:23:16.879 "zcopy": true, 00:23:16.879 "get_zone_info": false, 00:23:16.879 "zone_management": false, 00:23:16.879 "zone_append": false, 00:23:16.879 "compare": false, 00:23:16.879 "compare_and_write": false, 00:23:16.879 "abort": true, 00:23:16.879 "seek_hole": false, 00:23:16.879 "seek_data": false, 00:23:16.879 "copy": true, 00:23:16.879 "nvme_iov_md": false 00:23:16.879 }, 00:23:16.879 "memory_domains": [ 00:23:16.879 { 00:23:16.879 "dma_device_id": "system", 00:23:16.879 "dma_device_type": 1 00:23:16.879 }, 00:23:16.879 { 00:23:16.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.879 "dma_device_type": 2 00:23:16.879 } 00:23:16.879 ], 00:23:16.879 "driver_specific": {} 00:23:16.879 } 00:23:16.879 ] 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.879 23:04:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.879 "name": "Existed_Raid", 00:23:16.879 "uuid": "27ed55a6-04be-4219-93c7-28744346f98c", 00:23:16.879 "strip_size_kb": 64, 00:23:16.879 "state": "online", 00:23:16.879 "raid_level": "raid5f", 00:23:16.879 "superblock": true, 00:23:16.879 "num_base_bdevs": 4, 00:23:16.879 "num_base_bdevs_discovered": 4, 00:23:16.879 "num_base_bdevs_operational": 4, 00:23:16.879 "base_bdevs_list": [ 00:23:16.879 { 00:23:16.879 "name": "BaseBdev1", 00:23:16.879 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:16.879 "is_configured": true, 00:23:16.879 "data_offset": 2048, 00:23:16.879 "data_size": 63488 00:23:16.879 }, 00:23:16.879 { 00:23:16.879 "name": "BaseBdev2", 00:23:16.879 "uuid": "f60a277e-a6d6-428f-b1da-b925b9efe1eb", 00:23:16.879 "is_configured": true, 00:23:16.879 "data_offset": 2048, 00:23:16.879 "data_size": 63488 00:23:16.879 }, 00:23:16.879 { 00:23:16.879 "name": "BaseBdev3", 00:23:16.879 "uuid": "1830877f-4e67-44c5-b80b-f35c57bb055e", 00:23:16.879 "is_configured": true, 00:23:16.879 "data_offset": 2048, 00:23:16.879 "data_size": 63488 00:23:16.879 }, 00:23:16.879 { 00:23:16.879 "name": "BaseBdev4", 00:23:16.879 "uuid": "6dbe4288-8797-404e-99c9-be586cac9042", 00:23:16.879 "is_configured": true, 00:23:16.879 "data_offset": 2048, 00:23:16.879 "data_size": 63488 00:23:16.879 } 00:23:16.879 ] 00:23:16.879 }' 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.879 23:04:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.447 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:17.447 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:23:17.447 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:17.448 [2024-12-09 23:04:33.056091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.448 "name": "Existed_Raid", 00:23:17.448 "aliases": [ 00:23:17.448 "27ed55a6-04be-4219-93c7-28744346f98c" 00:23:17.448 ], 00:23:17.448 "product_name": "Raid Volume", 00:23:17.448 "block_size": 512, 00:23:17.448 "num_blocks": 190464, 00:23:17.448 "uuid": "27ed55a6-04be-4219-93c7-28744346f98c", 00:23:17.448 "assigned_rate_limits": { 00:23:17.448 "rw_ios_per_sec": 0, 00:23:17.448 "rw_mbytes_per_sec": 0, 00:23:17.448 "r_mbytes_per_sec": 0, 00:23:17.448 "w_mbytes_per_sec": 0 00:23:17.448 }, 00:23:17.448 "claimed": false, 00:23:17.448 "zoned": false, 00:23:17.448 "supported_io_types": { 00:23:17.448 "read": true, 00:23:17.448 "write": true, 00:23:17.448 "unmap": false, 00:23:17.448 "flush": false, 
00:23:17.448 "reset": true, 00:23:17.448 "nvme_admin": false, 00:23:17.448 "nvme_io": false, 00:23:17.448 "nvme_io_md": false, 00:23:17.448 "write_zeroes": true, 00:23:17.448 "zcopy": false, 00:23:17.448 "get_zone_info": false, 00:23:17.448 "zone_management": false, 00:23:17.448 "zone_append": false, 00:23:17.448 "compare": false, 00:23:17.448 "compare_and_write": false, 00:23:17.448 "abort": false, 00:23:17.448 "seek_hole": false, 00:23:17.448 "seek_data": false, 00:23:17.448 "copy": false, 00:23:17.448 "nvme_iov_md": false 00:23:17.448 }, 00:23:17.448 "driver_specific": { 00:23:17.448 "raid": { 00:23:17.448 "uuid": "27ed55a6-04be-4219-93c7-28744346f98c", 00:23:17.448 "strip_size_kb": 64, 00:23:17.448 "state": "online", 00:23:17.448 "raid_level": "raid5f", 00:23:17.448 "superblock": true, 00:23:17.448 "num_base_bdevs": 4, 00:23:17.448 "num_base_bdevs_discovered": 4, 00:23:17.448 "num_base_bdevs_operational": 4, 00:23:17.448 "base_bdevs_list": [ 00:23:17.448 { 00:23:17.448 "name": "BaseBdev1", 00:23:17.448 "uuid": "cdb77c9d-b93f-4643-8a48-5cceea3715f0", 00:23:17.448 "is_configured": true, 00:23:17.448 "data_offset": 2048, 00:23:17.448 "data_size": 63488 00:23:17.448 }, 00:23:17.448 { 00:23:17.448 "name": "BaseBdev2", 00:23:17.448 "uuid": "f60a277e-a6d6-428f-b1da-b925b9efe1eb", 00:23:17.448 "is_configured": true, 00:23:17.448 "data_offset": 2048, 00:23:17.448 "data_size": 63488 00:23:17.448 }, 00:23:17.448 { 00:23:17.448 "name": "BaseBdev3", 00:23:17.448 "uuid": "1830877f-4e67-44c5-b80b-f35c57bb055e", 00:23:17.448 "is_configured": true, 00:23:17.448 "data_offset": 2048, 00:23:17.448 "data_size": 63488 00:23:17.448 }, 00:23:17.448 { 00:23:17.448 "name": "BaseBdev4", 00:23:17.448 "uuid": "6dbe4288-8797-404e-99c9-be586cac9042", 00:23:17.448 "is_configured": true, 00:23:17.448 "data_offset": 2048, 00:23:17.448 "data_size": 63488 00:23:17.448 } 00:23:17.448 ] 00:23:17.448 } 00:23:17.448 } 00:23:17.448 }' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:17.448 BaseBdev2 00:23:17.448 BaseBdev3 00:23:17.448 BaseBdev4' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.448 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:17.709 23:04:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.709 [2024-12-09 23:04:33.363479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.709 "name": "Existed_Raid", 00:23:17.709 "uuid": "27ed55a6-04be-4219-93c7-28744346f98c", 00:23:17.709 "strip_size_kb": 64, 00:23:17.709 "state": "online", 00:23:17.709 "raid_level": "raid5f", 00:23:17.709 "superblock": true, 00:23:17.709 "num_base_bdevs": 4, 00:23:17.709 "num_base_bdevs_discovered": 3, 00:23:17.709 "num_base_bdevs_operational": 3, 00:23:17.709 "base_bdevs_list": [ 00:23:17.709 { 00:23:17.709 "name": 
null, 00:23:17.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.709 "is_configured": false, 00:23:17.709 "data_offset": 0, 00:23:17.709 "data_size": 63488 00:23:17.709 }, 00:23:17.709 { 00:23:17.709 "name": "BaseBdev2", 00:23:17.709 "uuid": "f60a277e-a6d6-428f-b1da-b925b9efe1eb", 00:23:17.709 "is_configured": true, 00:23:17.709 "data_offset": 2048, 00:23:17.709 "data_size": 63488 00:23:17.709 }, 00:23:17.709 { 00:23:17.709 "name": "BaseBdev3", 00:23:17.709 "uuid": "1830877f-4e67-44c5-b80b-f35c57bb055e", 00:23:17.709 "is_configured": true, 00:23:17.709 "data_offset": 2048, 00:23:17.709 "data_size": 63488 00:23:17.709 }, 00:23:17.709 { 00:23:17.709 "name": "BaseBdev4", 00:23:17.709 "uuid": "6dbe4288-8797-404e-99c9-be586cac9042", 00:23:17.709 "is_configured": true, 00:23:17.709 "data_offset": 2048, 00:23:17.709 "data_size": 63488 00:23:17.709 } 00:23:17.709 ] 00:23:17.709 }' 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.709 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.277 23:04:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.277 [2024-12-09 23:04:33.982944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:18.277 [2024-12-09 23:04:33.983140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.277 [2024-12-09 23:04:34.089840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:18.277 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.535 [2024-12-09 23:04:34.149809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.535 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.535 [2024-12-09 
23:04:34.318758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:18.535 [2024-12-09 23:04:34.318916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.793 23:04:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.793 BaseBdev2 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.793 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.793 [ 00:23:18.793 { 00:23:18.793 "name": "BaseBdev2", 00:23:18.793 "aliases": [ 00:23:18.793 "90f6f494-946d-4b17-86c8-06438b5cb2a7" 00:23:18.793 ], 00:23:18.793 "product_name": "Malloc disk", 00:23:18.793 "block_size": 512, 00:23:18.793 
"num_blocks": 65536, 00:23:18.793 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:18.793 "assigned_rate_limits": { 00:23:18.793 "rw_ios_per_sec": 0, 00:23:18.793 "rw_mbytes_per_sec": 0, 00:23:18.793 "r_mbytes_per_sec": 0, 00:23:18.793 "w_mbytes_per_sec": 0 00:23:18.793 }, 00:23:18.793 "claimed": false, 00:23:18.793 "zoned": false, 00:23:18.793 "supported_io_types": { 00:23:18.793 "read": true, 00:23:18.793 "write": true, 00:23:18.793 "unmap": true, 00:23:18.793 "flush": true, 00:23:18.793 "reset": true, 00:23:18.793 "nvme_admin": false, 00:23:18.793 "nvme_io": false, 00:23:18.793 "nvme_io_md": false, 00:23:18.793 "write_zeroes": true, 00:23:18.793 "zcopy": true, 00:23:18.793 "get_zone_info": false, 00:23:18.793 "zone_management": false, 00:23:18.793 "zone_append": false, 00:23:18.793 "compare": false, 00:23:18.793 "compare_and_write": false, 00:23:18.793 "abort": true, 00:23:18.793 "seek_hole": false, 00:23:18.793 "seek_data": false, 00:23:18.793 "copy": true, 00:23:18.793 "nvme_iov_md": false 00:23:18.793 }, 00:23:18.793 "memory_domains": [ 00:23:18.793 { 00:23:18.793 "dma_device_id": "system", 00:23:18.794 "dma_device_type": 1 00:23:18.794 }, 00:23:18.794 { 00:23:18.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.794 "dma_device_type": 2 00:23:18.794 } 00:23:18.794 ], 00:23:18.794 "driver_specific": {} 00:23:18.794 } 00:23:18.794 ] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:18.794 23:04:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.794 BaseBdev3 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.794 [ 00:23:18.794 { 00:23:18.794 "name": "BaseBdev3", 00:23:18.794 "aliases": [ 00:23:18.794 
"45d1d963-6c2e-4ae5-9211-61df6705390b" 00:23:18.794 ], 00:23:18.794 "product_name": "Malloc disk", 00:23:18.794 "block_size": 512, 00:23:18.794 "num_blocks": 65536, 00:23:18.794 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:18.794 "assigned_rate_limits": { 00:23:18.794 "rw_ios_per_sec": 0, 00:23:18.794 "rw_mbytes_per_sec": 0, 00:23:18.794 "r_mbytes_per_sec": 0, 00:23:18.794 "w_mbytes_per_sec": 0 00:23:18.794 }, 00:23:18.794 "claimed": false, 00:23:18.794 "zoned": false, 00:23:18.794 "supported_io_types": { 00:23:18.794 "read": true, 00:23:18.794 "write": true, 00:23:18.794 "unmap": true, 00:23:18.794 "flush": true, 00:23:18.794 "reset": true, 00:23:18.794 "nvme_admin": false, 00:23:18.794 "nvme_io": false, 00:23:18.794 "nvme_io_md": false, 00:23:18.794 "write_zeroes": true, 00:23:18.794 "zcopy": true, 00:23:18.794 "get_zone_info": false, 00:23:18.794 "zone_management": false, 00:23:18.794 "zone_append": false, 00:23:18.794 "compare": false, 00:23:18.794 "compare_and_write": false, 00:23:18.794 "abort": true, 00:23:18.794 "seek_hole": false, 00:23:18.794 "seek_data": false, 00:23:18.794 "copy": true, 00:23:18.794 "nvme_iov_md": false 00:23:18.794 }, 00:23:18.794 "memory_domains": [ 00:23:18.794 { 00:23:18.794 "dma_device_id": "system", 00:23:18.794 "dma_device_type": 1 00:23:18.794 }, 00:23:18.794 { 00:23:18.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.794 "dma_device_type": 2 00:23:18.794 } 00:23:18.794 ], 00:23:18.794 "driver_specific": {} 00:23:18.794 } 00:23:18.794 ] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:18.794 23:04:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.794 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.052 BaseBdev4 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:19.052 [ 00:23:19.052 { 00:23:19.052 "name": "BaseBdev4", 00:23:19.052 "aliases": [ 00:23:19.052 "383dba15-e10f-4dcd-ab41-8927f30a3adc" 00:23:19.052 ], 00:23:19.052 "product_name": "Malloc disk", 00:23:19.052 "block_size": 512, 00:23:19.052 "num_blocks": 65536, 00:23:19.052 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:19.052 "assigned_rate_limits": { 00:23:19.052 "rw_ios_per_sec": 0, 00:23:19.052 "rw_mbytes_per_sec": 0, 00:23:19.052 "r_mbytes_per_sec": 0, 00:23:19.052 "w_mbytes_per_sec": 0 00:23:19.052 }, 00:23:19.052 "claimed": false, 00:23:19.052 "zoned": false, 00:23:19.052 "supported_io_types": { 00:23:19.052 "read": true, 00:23:19.052 "write": true, 00:23:19.052 "unmap": true, 00:23:19.052 "flush": true, 00:23:19.052 "reset": true, 00:23:19.052 "nvme_admin": false, 00:23:19.052 "nvme_io": false, 00:23:19.052 "nvme_io_md": false, 00:23:19.052 "write_zeroes": true, 00:23:19.052 "zcopy": true, 00:23:19.052 "get_zone_info": false, 00:23:19.052 "zone_management": false, 00:23:19.052 "zone_append": false, 00:23:19.052 "compare": false, 00:23:19.052 "compare_and_write": false, 00:23:19.052 "abort": true, 00:23:19.052 "seek_hole": false, 00:23:19.052 "seek_data": false, 00:23:19.052 "copy": true, 00:23:19.052 "nvme_iov_md": false 00:23:19.052 }, 00:23:19.052 "memory_domains": [ 00:23:19.052 { 00:23:19.052 "dma_device_id": "system", 00:23:19.052 "dma_device_type": 1 00:23:19.052 }, 00:23:19.052 { 00:23:19.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.052 "dma_device_type": 2 00:23:19.052 } 00:23:19.052 ], 00:23:19.052 "driver_specific": {} 00:23:19.052 } 00:23:19.052 ] 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:19.052 23:04:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.052 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.052 [2024-12-09 23:04:34.740153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.053 [2024-12-09 23:04:34.740311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.053 [2024-12-09 23:04:34.740370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.053 [2024-12-09 23:04:34.742358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:19.053 [2024-12-09 23:04:34.742484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.053 "name": "Existed_Raid", 00:23:19.053 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:19.053 "strip_size_kb": 64, 00:23:19.053 "state": "configuring", 00:23:19.053 "raid_level": "raid5f", 00:23:19.053 "superblock": true, 00:23:19.053 "num_base_bdevs": 4, 00:23:19.053 "num_base_bdevs_discovered": 3, 00:23:19.053 "num_base_bdevs_operational": 4, 00:23:19.053 "base_bdevs_list": [ 00:23:19.053 { 00:23:19.053 "name": "BaseBdev1", 00:23:19.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.053 "is_configured": false, 00:23:19.053 "data_offset": 0, 00:23:19.053 "data_size": 0 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "name": "BaseBdev2", 00:23:19.053 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:19.053 "is_configured": true, 00:23:19.053 "data_offset": 2048, 00:23:19.053 
"data_size": 63488 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "name": "BaseBdev3", 00:23:19.053 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:19.053 "is_configured": true, 00:23:19.053 "data_offset": 2048, 00:23:19.053 "data_size": 63488 00:23:19.053 }, 00:23:19.053 { 00:23:19.053 "name": "BaseBdev4", 00:23:19.053 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:19.053 "is_configured": true, 00:23:19.053 "data_offset": 2048, 00:23:19.053 "data_size": 63488 00:23:19.053 } 00:23:19.053 ] 00:23:19.053 }' 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.053 23:04:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.620 [2024-12-09 23:04:35.219339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:19.620 23:04:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.620 "name": "Existed_Raid", 00:23:19.620 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:19.620 "strip_size_kb": 64, 00:23:19.620 "state": "configuring", 00:23:19.620 "raid_level": "raid5f", 00:23:19.620 "superblock": true, 00:23:19.620 "num_base_bdevs": 4, 00:23:19.620 "num_base_bdevs_discovered": 2, 00:23:19.620 "num_base_bdevs_operational": 4, 00:23:19.620 "base_bdevs_list": [ 00:23:19.620 { 00:23:19.620 "name": "BaseBdev1", 00:23:19.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.620 "is_configured": false, 00:23:19.620 "data_offset": 0, 00:23:19.620 "data_size": 0 00:23:19.620 }, 00:23:19.620 { 00:23:19.620 "name": null, 00:23:19.620 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:19.620 
"is_configured": false, 00:23:19.620 "data_offset": 0, 00:23:19.620 "data_size": 63488 00:23:19.620 }, 00:23:19.620 { 00:23:19.620 "name": "BaseBdev3", 00:23:19.620 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:19.620 "is_configured": true, 00:23:19.620 "data_offset": 2048, 00:23:19.620 "data_size": 63488 00:23:19.620 }, 00:23:19.620 { 00:23:19.620 "name": "BaseBdev4", 00:23:19.620 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:19.620 "is_configured": true, 00:23:19.620 "data_offset": 2048, 00:23:19.620 "data_size": 63488 00:23:19.620 } 00:23:19.620 ] 00:23:19.620 }' 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.620 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.879 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.138 [2024-12-09 23:04:35.751202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:23:20.138 BaseBdev1 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.138 [ 00:23:20.138 { 00:23:20.138 "name": "BaseBdev1", 00:23:20.138 "aliases": [ 00:23:20.138 "e299781c-fa61-42e8-a040-916829b6cac0" 00:23:20.138 ], 00:23:20.138 "product_name": "Malloc disk", 00:23:20.138 "block_size": 512, 00:23:20.138 "num_blocks": 65536, 00:23:20.138 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 
00:23:20.138 "assigned_rate_limits": { 00:23:20.138 "rw_ios_per_sec": 0, 00:23:20.138 "rw_mbytes_per_sec": 0, 00:23:20.138 "r_mbytes_per_sec": 0, 00:23:20.138 "w_mbytes_per_sec": 0 00:23:20.138 }, 00:23:20.138 "claimed": true, 00:23:20.138 "claim_type": "exclusive_write", 00:23:20.138 "zoned": false, 00:23:20.138 "supported_io_types": { 00:23:20.138 "read": true, 00:23:20.138 "write": true, 00:23:20.138 "unmap": true, 00:23:20.138 "flush": true, 00:23:20.138 "reset": true, 00:23:20.138 "nvme_admin": false, 00:23:20.138 "nvme_io": false, 00:23:20.138 "nvme_io_md": false, 00:23:20.138 "write_zeroes": true, 00:23:20.138 "zcopy": true, 00:23:20.138 "get_zone_info": false, 00:23:20.138 "zone_management": false, 00:23:20.138 "zone_append": false, 00:23:20.138 "compare": false, 00:23:20.138 "compare_and_write": false, 00:23:20.138 "abort": true, 00:23:20.138 "seek_hole": false, 00:23:20.138 "seek_data": false, 00:23:20.138 "copy": true, 00:23:20.138 "nvme_iov_md": false 00:23:20.138 }, 00:23:20.138 "memory_domains": [ 00:23:20.138 { 00:23:20.138 "dma_device_id": "system", 00:23:20.138 "dma_device_type": 1 00:23:20.138 }, 00:23:20.138 { 00:23:20.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.138 "dma_device_type": 2 00:23:20.138 } 00:23:20.138 ], 00:23:20.138 "driver_specific": {} 00:23:20.138 } 00:23:20.138 ] 00:23:20.138 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.139 23:04:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.139 "name": "Existed_Raid", 00:23:20.139 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:20.139 "strip_size_kb": 64, 00:23:20.139 "state": "configuring", 00:23:20.139 "raid_level": "raid5f", 00:23:20.139 "superblock": true, 00:23:20.139 "num_base_bdevs": 4, 00:23:20.139 "num_base_bdevs_discovered": 3, 00:23:20.139 "num_base_bdevs_operational": 4, 00:23:20.139 "base_bdevs_list": [ 00:23:20.139 { 00:23:20.139 "name": "BaseBdev1", 00:23:20.139 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 
00:23:20.139 "is_configured": true, 00:23:20.139 "data_offset": 2048, 00:23:20.139 "data_size": 63488 00:23:20.139 }, 00:23:20.139 { 00:23:20.139 "name": null, 00:23:20.139 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:20.139 "is_configured": false, 00:23:20.139 "data_offset": 0, 00:23:20.139 "data_size": 63488 00:23:20.139 }, 00:23:20.139 { 00:23:20.139 "name": "BaseBdev3", 00:23:20.139 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:20.139 "is_configured": true, 00:23:20.139 "data_offset": 2048, 00:23:20.139 "data_size": 63488 00:23:20.139 }, 00:23:20.139 { 00:23:20.139 "name": "BaseBdev4", 00:23:20.139 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:20.139 "is_configured": true, 00:23:20.139 "data_offset": 2048, 00:23:20.139 "data_size": 63488 00:23:20.139 } 00:23:20.139 ] 00:23:20.139 }' 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.139 23:04:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.458 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.458 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:20.458 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.458 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.458 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 [2024-12-09 23:04:36.306407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.717 "name": "Existed_Raid", 00:23:20.717 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:20.717 "strip_size_kb": 64, 00:23:20.717 "state": "configuring", 00:23:20.717 "raid_level": "raid5f", 00:23:20.717 "superblock": true, 00:23:20.717 "num_base_bdevs": 4, 00:23:20.717 "num_base_bdevs_discovered": 2, 00:23:20.717 "num_base_bdevs_operational": 4, 00:23:20.717 "base_bdevs_list": [ 00:23:20.717 { 00:23:20.717 "name": "BaseBdev1", 00:23:20.717 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:20.717 "is_configured": true, 00:23:20.717 "data_offset": 2048, 00:23:20.717 "data_size": 63488 00:23:20.717 }, 00:23:20.717 { 00:23:20.717 "name": null, 00:23:20.717 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:20.717 "is_configured": false, 00:23:20.717 "data_offset": 0, 00:23:20.717 "data_size": 63488 00:23:20.717 }, 00:23:20.717 { 00:23:20.717 "name": null, 00:23:20.717 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:20.717 "is_configured": false, 00:23:20.717 "data_offset": 0, 00:23:20.717 "data_size": 63488 00:23:20.717 }, 00:23:20.717 { 00:23:20.717 "name": "BaseBdev4", 00:23:20.717 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:20.717 "is_configured": true, 00:23:20.717 "data_offset": 2048, 00:23:20.717 "data_size": 63488 00:23:20.717 } 00:23:20.717 ] 00:23:20.717 }' 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.717 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.976 23:04:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:20.976 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.976 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.976 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.235 [2024-12-09 23:04:36.841504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.235 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.236 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.236 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.236 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.236 "name": "Existed_Raid", 00:23:21.236 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:21.236 "strip_size_kb": 64, 00:23:21.236 "state": "configuring", 00:23:21.236 "raid_level": "raid5f", 00:23:21.236 "superblock": true, 00:23:21.236 "num_base_bdevs": 4, 00:23:21.236 "num_base_bdevs_discovered": 3, 00:23:21.236 "num_base_bdevs_operational": 4, 00:23:21.236 "base_bdevs_list": [ 00:23:21.236 { 00:23:21.236 "name": "BaseBdev1", 00:23:21.236 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:21.236 "is_configured": true, 00:23:21.236 "data_offset": 2048, 00:23:21.236 "data_size": 63488 00:23:21.236 }, 00:23:21.236 { 00:23:21.236 "name": null, 00:23:21.236 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:21.236 "is_configured": false, 00:23:21.236 "data_offset": 0, 00:23:21.236 "data_size": 63488 00:23:21.236 }, 00:23:21.236 { 00:23:21.236 "name": "BaseBdev3", 00:23:21.236 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 
00:23:21.236 "is_configured": true, 00:23:21.236 "data_offset": 2048, 00:23:21.236 "data_size": 63488 00:23:21.236 }, 00:23:21.236 { 00:23:21.236 "name": "BaseBdev4", 00:23:21.236 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:21.236 "is_configured": true, 00:23:21.236 "data_offset": 2048, 00:23:21.236 "data_size": 63488 00:23:21.236 } 00:23:21.236 ] 00:23:21.236 }' 00:23:21.236 23:04:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.236 23:04:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.494 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.494 [2024-12-09 23:04:37.308770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.752 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.753 "name": "Existed_Raid", 00:23:21.753 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:21.753 "strip_size_kb": 64, 00:23:21.753 "state": "configuring", 00:23:21.753 "raid_level": "raid5f", 
00:23:21.753 "superblock": true, 00:23:21.753 "num_base_bdevs": 4, 00:23:21.753 "num_base_bdevs_discovered": 2, 00:23:21.753 "num_base_bdevs_operational": 4, 00:23:21.753 "base_bdevs_list": [ 00:23:21.753 { 00:23:21.753 "name": null, 00:23:21.753 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:21.753 "is_configured": false, 00:23:21.753 "data_offset": 0, 00:23:21.753 "data_size": 63488 00:23:21.753 }, 00:23:21.753 { 00:23:21.753 "name": null, 00:23:21.753 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:21.753 "is_configured": false, 00:23:21.753 "data_offset": 0, 00:23:21.753 "data_size": 63488 00:23:21.753 }, 00:23:21.753 { 00:23:21.753 "name": "BaseBdev3", 00:23:21.753 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:21.753 "is_configured": true, 00:23:21.753 "data_offset": 2048, 00:23:21.753 "data_size": 63488 00:23:21.753 }, 00:23:21.753 { 00:23:21.753 "name": "BaseBdev4", 00:23:21.753 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:21.753 "is_configured": true, 00:23:21.753 "data_offset": 2048, 00:23:21.753 "data_size": 63488 00:23:21.753 } 00:23:21.753 ] 00:23:21.753 }' 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.753 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.322 [2024-12-09 23:04:37.933490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.322 23:04:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.323 "name": "Existed_Raid", 00:23:22.323 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:22.323 "strip_size_kb": 64, 00:23:22.323 "state": "configuring", 00:23:22.323 "raid_level": "raid5f", 00:23:22.323 "superblock": true, 00:23:22.323 "num_base_bdevs": 4, 00:23:22.323 "num_base_bdevs_discovered": 3, 00:23:22.323 "num_base_bdevs_operational": 4, 00:23:22.323 "base_bdevs_list": [ 00:23:22.323 { 00:23:22.323 "name": null, 00:23:22.323 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:22.323 "is_configured": false, 00:23:22.323 "data_offset": 0, 00:23:22.323 "data_size": 63488 00:23:22.323 }, 00:23:22.323 { 00:23:22.323 "name": "BaseBdev2", 00:23:22.323 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:22.323 "is_configured": true, 00:23:22.323 "data_offset": 2048, 00:23:22.323 "data_size": 63488 00:23:22.323 }, 00:23:22.323 { 00:23:22.323 "name": "BaseBdev3", 00:23:22.323 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:22.323 "is_configured": true, 00:23:22.323 "data_offset": 2048, 00:23:22.323 "data_size": 63488 00:23:22.323 }, 00:23:22.323 { 00:23:22.323 "name": "BaseBdev4", 00:23:22.323 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:22.323 "is_configured": true, 00:23:22.323 "data_offset": 2048, 00:23:22.323 "data_size": 63488 00:23:22.323 } 00:23:22.323 ] 00:23:22.323 }' 00:23:22.323 23:04:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.323 23:04:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e299781c-fa61-42e8-a040-916829b6cac0 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.892 [2024-12-09 23:04:38.570118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:22.892 [2024-12-09 
23:04:38.570377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:22.892 [2024-12-09 23:04:38.570391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:22.892 [2024-12-09 23:04:38.570706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:22.892 NewBaseBdev 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.892 [2024-12-09 23:04:38.578786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:22.892 [2024-12-09 23:04:38.578871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:22.892 [2024-12-09 23:04:38.579107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.892 23:04:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.892 [ 00:23:22.892 { 00:23:22.892 "name": "NewBaseBdev", 00:23:22.892 "aliases": [ 00:23:22.892 "e299781c-fa61-42e8-a040-916829b6cac0" 00:23:22.892 ], 00:23:22.892 "product_name": "Malloc disk", 00:23:22.892 "block_size": 512, 00:23:22.892 "num_blocks": 65536, 00:23:22.892 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:22.892 "assigned_rate_limits": { 00:23:22.892 "rw_ios_per_sec": 0, 00:23:22.892 "rw_mbytes_per_sec": 0, 00:23:22.892 "r_mbytes_per_sec": 0, 00:23:22.892 "w_mbytes_per_sec": 0 00:23:22.892 }, 00:23:22.892 "claimed": true, 00:23:22.892 "claim_type": "exclusive_write", 00:23:22.892 "zoned": false, 00:23:22.892 "supported_io_types": { 00:23:22.892 "read": true, 00:23:22.892 "write": true, 00:23:22.892 "unmap": true, 00:23:22.892 "flush": true, 00:23:22.892 "reset": true, 00:23:22.892 "nvme_admin": false, 00:23:22.892 "nvme_io": false, 00:23:22.892 "nvme_io_md": false, 00:23:22.892 "write_zeroes": true, 00:23:22.892 "zcopy": true, 00:23:22.892 "get_zone_info": false, 00:23:22.892 "zone_management": false, 00:23:22.892 "zone_append": false, 00:23:22.892 "compare": false, 00:23:22.892 "compare_and_write": false, 00:23:22.892 "abort": true, 00:23:22.892 "seek_hole": false, 00:23:22.892 "seek_data": false, 00:23:22.892 "copy": true, 00:23:22.892 "nvme_iov_md": false 00:23:22.892 }, 00:23:22.892 "memory_domains": [ 00:23:22.892 { 00:23:22.892 "dma_device_id": "system", 00:23:22.892 "dma_device_type": 1 00:23:22.892 }, 00:23:22.892 { 00:23:22.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:22.892 "dma_device_type": 2 00:23:22.892 } 00:23:22.892 ], 00:23:22.892 "driver_specific": {} 00:23:22.892 } 00:23:22.892 ] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.892 23:04:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.893 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.893 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.893 "name": "Existed_Raid", 00:23:22.893 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:22.893 "strip_size_kb": 64, 00:23:22.893 "state": "online", 00:23:22.893 "raid_level": "raid5f", 00:23:22.893 "superblock": true, 00:23:22.893 "num_base_bdevs": 4, 00:23:22.893 "num_base_bdevs_discovered": 4, 00:23:22.893 "num_base_bdevs_operational": 4, 00:23:22.893 "base_bdevs_list": [ 00:23:22.893 { 00:23:22.893 "name": "NewBaseBdev", 00:23:22.893 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:22.893 "is_configured": true, 00:23:22.893 "data_offset": 2048, 00:23:22.893 "data_size": 63488 00:23:22.893 }, 00:23:22.893 { 00:23:22.893 "name": "BaseBdev2", 00:23:22.893 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:22.893 "is_configured": true, 00:23:22.893 "data_offset": 2048, 00:23:22.893 "data_size": 63488 00:23:22.893 }, 00:23:22.893 { 00:23:22.893 "name": "BaseBdev3", 00:23:22.893 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:22.893 "is_configured": true, 00:23:22.893 "data_offset": 2048, 00:23:22.893 "data_size": 63488 00:23:22.893 }, 00:23:22.893 { 00:23:22.893 "name": "BaseBdev4", 00:23:22.893 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:22.893 "is_configured": true, 00:23:22.893 "data_offset": 2048, 00:23:22.893 "data_size": 63488 00:23:22.893 } 00:23:22.893 ] 00:23:22.893 }' 00:23:22.893 23:04:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.893 23:04:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:23.463 23:04:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.463 [2024-12-09 23:04:39.063888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:23.463 "name": "Existed_Raid", 00:23:23.463 "aliases": [ 00:23:23.463 "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772" 00:23:23.463 ], 00:23:23.463 "product_name": "Raid Volume", 00:23:23.463 "block_size": 512, 00:23:23.463 "num_blocks": 190464, 00:23:23.463 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:23.463 "assigned_rate_limits": { 00:23:23.463 "rw_ios_per_sec": 0, 00:23:23.463 "rw_mbytes_per_sec": 0, 00:23:23.463 "r_mbytes_per_sec": 0, 00:23:23.463 "w_mbytes_per_sec": 0 00:23:23.463 }, 00:23:23.463 "claimed": false, 00:23:23.463 "zoned": false, 00:23:23.463 "supported_io_types": { 00:23:23.463 "read": true, 00:23:23.463 
"write": true, 00:23:23.463 "unmap": false, 00:23:23.463 "flush": false, 00:23:23.463 "reset": true, 00:23:23.463 "nvme_admin": false, 00:23:23.463 "nvme_io": false, 00:23:23.463 "nvme_io_md": false, 00:23:23.463 "write_zeroes": true, 00:23:23.463 "zcopy": false, 00:23:23.463 "get_zone_info": false, 00:23:23.463 "zone_management": false, 00:23:23.463 "zone_append": false, 00:23:23.463 "compare": false, 00:23:23.463 "compare_and_write": false, 00:23:23.463 "abort": false, 00:23:23.463 "seek_hole": false, 00:23:23.463 "seek_data": false, 00:23:23.463 "copy": false, 00:23:23.463 "nvme_iov_md": false 00:23:23.463 }, 00:23:23.463 "driver_specific": { 00:23:23.463 "raid": { 00:23:23.463 "uuid": "bd6a092f-2bfe-47d0-b1b7-9e4fd477a772", 00:23:23.463 "strip_size_kb": 64, 00:23:23.463 "state": "online", 00:23:23.463 "raid_level": "raid5f", 00:23:23.463 "superblock": true, 00:23:23.463 "num_base_bdevs": 4, 00:23:23.463 "num_base_bdevs_discovered": 4, 00:23:23.463 "num_base_bdevs_operational": 4, 00:23:23.463 "base_bdevs_list": [ 00:23:23.463 { 00:23:23.463 "name": "NewBaseBdev", 00:23:23.463 "uuid": "e299781c-fa61-42e8-a040-916829b6cac0", 00:23:23.463 "is_configured": true, 00:23:23.463 "data_offset": 2048, 00:23:23.463 "data_size": 63488 00:23:23.463 }, 00:23:23.463 { 00:23:23.463 "name": "BaseBdev2", 00:23:23.463 "uuid": "90f6f494-946d-4b17-86c8-06438b5cb2a7", 00:23:23.463 "is_configured": true, 00:23:23.463 "data_offset": 2048, 00:23:23.463 "data_size": 63488 00:23:23.463 }, 00:23:23.463 { 00:23:23.463 "name": "BaseBdev3", 00:23:23.463 "uuid": "45d1d963-6c2e-4ae5-9211-61df6705390b", 00:23:23.463 "is_configured": true, 00:23:23.463 "data_offset": 2048, 00:23:23.463 "data_size": 63488 00:23:23.463 }, 00:23:23.463 { 00:23:23.463 "name": "BaseBdev4", 00:23:23.463 "uuid": "383dba15-e10f-4dcd-ab41-8927f30a3adc", 00:23:23.463 "is_configured": true, 00:23:23.463 "data_offset": 2048, 00:23:23.463 "data_size": 63488 00:23:23.463 } 00:23:23.463 ] 00:23:23.463 } 00:23:23.463 } 
00:23:23.463 }' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:23.463 BaseBdev2 00:23:23.463 BaseBdev3 00:23:23.463 BaseBdev4' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.463 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.723 [2024-12-09 23:04:39.375140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:23.723 [2024-12-09 23:04:39.375238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.723 [2024-12-09 23:04:39.375365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.723 [2024-12-09 23:04:39.375750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.723 [2024-12-09 23:04:39.375817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84138 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84138 ']' 00:23:23.723 23:04:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84138 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84138 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.723 killing process with pid 84138 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84138' 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84138 00:23:23.723 [2024-12-09 23:04:39.425467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.723 23:04:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84138 00:23:24.292 [2024-12-09 23:04:39.871716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.671 23:04:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:25.671 00:23:25.671 real 0m12.377s 00:23:25.671 user 0m19.505s 00:23:25.671 sys 0m2.360s 00:23:25.671 ************************************ 00:23:25.671 END TEST raid5f_state_function_test_sb 00:23:25.671 ************************************ 00:23:25.671 23:04:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.671 23:04:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.671 23:04:41 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:23:25.671 23:04:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:25.671 23:04:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.671 23:04:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.671 ************************************ 00:23:25.671 START TEST raid5f_superblock_test 00:23:25.671 ************************************ 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84815 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84815 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84815 ']' 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.671 23:04:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.671 [2024-12-09 23:04:41.289295] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:23:25.671 [2024-12-09 23:04:41.289535] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84815 ] 00:23:25.671 [2024-12-09 23:04:41.464517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.931 [2024-12-09 23:04:41.583596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.190 [2024-12-09 23:04:41.793840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.190 [2024-12-09 23:04:41.793958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.448 malloc1 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.448 [2024-12-09 23:04:42.209403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:26.448 [2024-12-09 23:04:42.209560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.448 [2024-12-09 23:04:42.209615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:26.448 [2024-12-09 23:04:42.209685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.448 [2024-12-09 23:04:42.211994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.448 [2024-12-09 23:04:42.212072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:26.448 pt1 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.448 malloc2 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.448 [2024-12-09 23:04:42.272699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:26.448 [2024-12-09 23:04:42.272768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.448 [2024-12-09 23:04:42.272796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:26.448 [2024-12-09 23:04:42.272807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.448 [2024-12-09 23:04:42.275284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.448 [2024-12-09 23:04:42.275328] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:26.448 pt2 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:26.448 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:26.449 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.449 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.747 malloc3 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.747 [2024-12-09 23:04:42.342068] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:26.747 [2024-12-09 23:04:42.342186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.747 [2024-12-09 23:04:42.342234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:26.747 [2024-12-09 23:04:42.342299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.747 [2024-12-09 23:04:42.344611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.747 [2024-12-09 23:04:42.344693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:26.747 pt3 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.747 23:04:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.747 malloc4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.747 [2024-12-09 23:04:42.402107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:26.747 [2024-12-09 23:04:42.402223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.747 [2024-12-09 23:04:42.402267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:26.747 [2024-12-09 23:04:42.402299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.747 [2024-12-09 23:04:42.404447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.747 [2024-12-09 23:04:42.404555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:26.747 pt4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.747 [2024-12-09 23:04:42.414123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:26.747 [2024-12-09 23:04:42.415959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:26.747 [2024-12-09 23:04:42.416046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:26.747 [2024-12-09 23:04:42.416097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:26.747 [2024-12-09 23:04:42.416321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:26.747 [2024-12-09 23:04:42.416337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:26.747 [2024-12-09 23:04:42.416630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:26.747 [2024-12-09 23:04:42.424404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:26.747 [2024-12-09 23:04:42.424428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:26.747 [2024-12-09 23:04:42.424697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.747 
23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.747 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.748 "name": "raid_bdev1", 00:23:26.748 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:26.748 "strip_size_kb": 64, 00:23:26.748 "state": "online", 00:23:26.748 "raid_level": "raid5f", 00:23:26.748 "superblock": true, 00:23:26.748 "num_base_bdevs": 4, 00:23:26.748 "num_base_bdevs_discovered": 4, 00:23:26.748 "num_base_bdevs_operational": 4, 00:23:26.748 "base_bdevs_list": [ 00:23:26.748 { 00:23:26.748 "name": "pt1", 00:23:26.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.748 "is_configured": true, 00:23:26.748 "data_offset": 2048, 00:23:26.748 "data_size": 63488 00:23:26.748 }, 00:23:26.748 { 00:23:26.748 "name": "pt2", 00:23:26.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.748 "is_configured": true, 00:23:26.748 "data_offset": 2048, 00:23:26.748 
"data_size": 63488 00:23:26.748 }, 00:23:26.748 { 00:23:26.748 "name": "pt3", 00:23:26.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:26.748 "is_configured": true, 00:23:26.748 "data_offset": 2048, 00:23:26.748 "data_size": 63488 00:23:26.748 }, 00:23:26.748 { 00:23:26.748 "name": "pt4", 00:23:26.748 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:26.748 "is_configured": true, 00:23:26.748 "data_offset": 2048, 00:23:26.748 "data_size": 63488 00:23:26.748 } 00:23:26.748 ] 00:23:26.748 }' 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.748 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 [2024-12-09 23:04:42.917340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:27.362 "name": "raid_bdev1", 00:23:27.362 "aliases": [ 00:23:27.362 "73658668-9c69-4510-953a-5b21cc669928" 00:23:27.362 ], 00:23:27.362 "product_name": "Raid Volume", 00:23:27.362 "block_size": 512, 00:23:27.362 "num_blocks": 190464, 00:23:27.362 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:27.362 "assigned_rate_limits": { 00:23:27.362 "rw_ios_per_sec": 0, 00:23:27.362 "rw_mbytes_per_sec": 0, 00:23:27.362 "r_mbytes_per_sec": 0, 00:23:27.362 "w_mbytes_per_sec": 0 00:23:27.362 }, 00:23:27.362 "claimed": false, 00:23:27.362 "zoned": false, 00:23:27.362 "supported_io_types": { 00:23:27.362 "read": true, 00:23:27.362 "write": true, 00:23:27.362 "unmap": false, 00:23:27.362 "flush": false, 00:23:27.362 "reset": true, 00:23:27.362 "nvme_admin": false, 00:23:27.362 "nvme_io": false, 00:23:27.362 "nvme_io_md": false, 00:23:27.362 "write_zeroes": true, 00:23:27.362 "zcopy": false, 00:23:27.362 "get_zone_info": false, 00:23:27.362 "zone_management": false, 00:23:27.362 "zone_append": false, 00:23:27.362 "compare": false, 00:23:27.362 "compare_and_write": false, 00:23:27.362 "abort": false, 00:23:27.362 "seek_hole": false, 00:23:27.362 "seek_data": false, 00:23:27.362 "copy": false, 00:23:27.362 "nvme_iov_md": false 00:23:27.362 }, 00:23:27.362 "driver_specific": { 00:23:27.362 "raid": { 00:23:27.362 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:27.362 "strip_size_kb": 64, 00:23:27.362 "state": "online", 00:23:27.362 "raid_level": "raid5f", 00:23:27.362 "superblock": true, 00:23:27.362 "num_base_bdevs": 4, 00:23:27.362 "num_base_bdevs_discovered": 4, 00:23:27.362 "num_base_bdevs_operational": 4, 00:23:27.362 "base_bdevs_list": [ 00:23:27.362 { 00:23:27.362 "name": "pt1", 00:23:27.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:27.362 "is_configured": true, 00:23:27.362 "data_offset": 2048, 
00:23:27.362 "data_size": 63488 00:23:27.362 }, 00:23:27.362 { 00:23:27.362 "name": "pt2", 00:23:27.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:27.362 "is_configured": true, 00:23:27.362 "data_offset": 2048, 00:23:27.362 "data_size": 63488 00:23:27.362 }, 00:23:27.362 { 00:23:27.362 "name": "pt3", 00:23:27.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:27.362 "is_configured": true, 00:23:27.362 "data_offset": 2048, 00:23:27.362 "data_size": 63488 00:23:27.362 }, 00:23:27.362 { 00:23:27.362 "name": "pt4", 00:23:27.362 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:27.362 "is_configured": true, 00:23:27.362 "data_offset": 2048, 00:23:27.362 "data_size": 63488 00:23:27.362 } 00:23:27.362 ] 00:23:27.362 } 00:23:27.362 } 00:23:27.362 }' 00:23:27.362 23:04:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:27.362 pt2 00:23:27.362 pt3 00:23:27.362 pt4' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 23:04:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:27.362 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.363 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 [2024-12-09 23:04:43.256891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73658668-9c69-4510-953a-5b21cc669928 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
73658668-9c69-4510-953a-5b21cc669928 ']' 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 [2024-12-09 23:04:43.288620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.622 [2024-12-09 23:04:43.288710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.622 [2024-12-09 23:04:43.288817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.622 [2024-12-09 23:04:43.288917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:27.622 [2024-12-09 23:04:43.288934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:27.622 
23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:27.622 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.623 23:04:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.623 [2024-12-09 23:04:43.452349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:27.623 [2024-12-09 23:04:43.454350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:27.623 [2024-12-09 23:04:43.454466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:27.623 [2024-12-09 23:04:43.454508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:27.623 [2024-12-09 23:04:43.454560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:27.623 [2024-12-09 23:04:43.454611] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:27.623 [2024-12-09 23:04:43.454631] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:27.623 [2024-12-09 23:04:43.454650] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:27.623 [2024-12-09 23:04:43.454664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.623 [2024-12-09 23:04:43.454675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:27.623 request: 00:23:27.623 { 00:23:27.623 "name": "raid_bdev1", 00:23:27.623 "raid_level": "raid5f", 00:23:27.623 "base_bdevs": [ 00:23:27.623 "malloc1", 00:23:27.623 "malloc2", 00:23:27.623 "malloc3", 00:23:27.623 "malloc4" 00:23:27.623 ], 00:23:27.623 "strip_size_kb": 64, 00:23:27.623 "superblock": false, 00:23:27.623 "method": "bdev_raid_create", 00:23:27.623 "req_id": 1 00:23:27.623 } 00:23:27.623 Got JSON-RPC error response 
00:23:27.623 response: 00:23:27.623 { 00:23:27.623 "code": -17, 00:23:27.623 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:27.623 } 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.623 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.882 [2024-12-09 23:04:43.516218] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:27.882 [2024-12-09 23:04:43.516360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:27.882 [2024-12-09 23:04:43.516402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:27.882 [2024-12-09 23:04:43.516444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.882 [2024-12-09 23:04:43.518873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.882 [2024-12-09 23:04:43.518960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:27.882 [2024-12-09 23:04:43.519081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:27.882 [2024-12-09 23:04:43.519178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:27.882 pt1 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.882 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.882 "name": "raid_bdev1", 00:23:27.882 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:27.882 "strip_size_kb": 64, 00:23:27.882 "state": "configuring", 00:23:27.882 "raid_level": "raid5f", 00:23:27.882 "superblock": true, 00:23:27.882 "num_base_bdevs": 4, 00:23:27.882 "num_base_bdevs_discovered": 1, 00:23:27.882 "num_base_bdevs_operational": 4, 00:23:27.882 "base_bdevs_list": [ 00:23:27.882 { 00:23:27.883 "name": "pt1", 00:23:27.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:27.883 "is_configured": true, 00:23:27.883 "data_offset": 2048, 00:23:27.883 "data_size": 63488 00:23:27.883 }, 00:23:27.883 { 00:23:27.883 "name": null, 00:23:27.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:27.883 "is_configured": false, 00:23:27.883 "data_offset": 2048, 00:23:27.883 "data_size": 63488 00:23:27.883 }, 00:23:27.883 { 00:23:27.883 "name": null, 00:23:27.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:27.883 "is_configured": false, 00:23:27.883 "data_offset": 2048, 00:23:27.883 "data_size": 63488 00:23:27.883 }, 00:23:27.883 { 00:23:27.883 "name": null, 00:23:27.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:27.883 "is_configured": false, 00:23:27.883 "data_offset": 2048, 00:23:27.883 "data_size": 63488 00:23:27.883 } 00:23:27.883 ] 00:23:27.883 }' 
00:23:27.883 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.883 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.142 [2024-12-09 23:04:43.983425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.142 [2024-12-09 23:04:43.983575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.142 [2024-12-09 23:04:43.983631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:28.142 [2024-12-09 23:04:43.983671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.142 [2024-12-09 23:04:43.984206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.142 [2024-12-09 23:04:43.984278] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.142 [2024-12-09 23:04:43.984409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:28.142 [2024-12-09 23:04:43.984489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:28.142 pt2 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:28.142 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.142 [2024-12-09 23:04:43.995398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:28.402 23:04:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.402 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:28.403 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.403 23:04:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.403 "name": "raid_bdev1", 00:23:28.403 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:28.403 "strip_size_kb": 64, 00:23:28.403 "state": "configuring", 00:23:28.403 "raid_level": "raid5f", 00:23:28.403 "superblock": true, 00:23:28.403 "num_base_bdevs": 4, 00:23:28.403 "num_base_bdevs_discovered": 1, 00:23:28.403 "num_base_bdevs_operational": 4, 00:23:28.403 "base_bdevs_list": [ 00:23:28.403 { 00:23:28.403 "name": "pt1", 00:23:28.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.403 "is_configured": true, 00:23:28.403 "data_offset": 2048, 00:23:28.403 "data_size": 63488 00:23:28.403 }, 00:23:28.403 { 00:23:28.403 "name": null, 00:23:28.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.403 "is_configured": false, 00:23:28.403 "data_offset": 0, 00:23:28.403 "data_size": 63488 00:23:28.403 }, 00:23:28.403 { 00:23:28.403 "name": null, 00:23:28.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.403 "is_configured": false, 00:23:28.403 "data_offset": 2048, 00:23:28.403 "data_size": 63488 00:23:28.403 }, 00:23:28.403 { 00:23:28.403 "name": null, 00:23:28.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:28.403 "is_configured": false, 00:23:28.403 "data_offset": 2048, 00:23:28.403 "data_size": 63488 00:23:28.403 } 00:23:28.403 ] 00:23:28.403 }' 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.403 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.661 [2024-12-09 23:04:44.442644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.661 [2024-12-09 23:04:44.442725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.661 [2024-12-09 23:04:44.442748] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:28.661 [2024-12-09 23:04:44.442758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.661 [2024-12-09 23:04:44.443259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.661 [2024-12-09 23:04:44.443285] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.661 [2024-12-09 23:04:44.443379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:28.661 [2024-12-09 23:04:44.443403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:28.661 pt2 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.661 [2024-12-09 23:04:44.454630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:23:28.661 [2024-12-09 23:04:44.454701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.661 [2024-12-09 23:04:44.454727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:28.661 [2024-12-09 23:04:44.454737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.661 [2024-12-09 23:04:44.455244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.661 [2024-12-09 23:04:44.455270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.661 [2024-12-09 23:04:44.455358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:28.661 [2024-12-09 23:04:44.455390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:28.661 pt3 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.661 [2024-12-09 23:04:44.466572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:28.661 [2024-12-09 23:04:44.466630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.661 [2024-12-09 23:04:44.466653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:28.661 [2024-12-09 23:04:44.466663] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.661 [2024-12-09 23:04:44.467170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.661 [2024-12-09 23:04:44.467189] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:28.661 [2024-12-09 23:04:44.467278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:28.661 [2024-12-09 23:04:44.467305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:28.661 [2024-12-09 23:04:44.467489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:28.661 [2024-12-09 23:04:44.467501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:28.661 [2024-12-09 23:04:44.467785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:28.661 [2024-12-09 23:04:44.476477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:28.661 [2024-12-09 23:04:44.476589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:28.661 [2024-12-09 23:04:44.476822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.661 pt4 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.661 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.919 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.919 "name": "raid_bdev1", 00:23:28.919 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:28.919 "strip_size_kb": 64, 00:23:28.919 "state": "online", 00:23:28.919 "raid_level": "raid5f", 00:23:28.919 "superblock": true, 00:23:28.919 "num_base_bdevs": 4, 00:23:28.919 "num_base_bdevs_discovered": 4, 00:23:28.919 "num_base_bdevs_operational": 4, 00:23:28.919 "base_bdevs_list": [ 00:23:28.919 { 00:23:28.919 "name": "pt1", 00:23:28.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.919 "is_configured": true, 00:23:28.919 
"data_offset": 2048, 00:23:28.919 "data_size": 63488 00:23:28.919 }, 00:23:28.919 { 00:23:28.919 "name": "pt2", 00:23:28.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.919 "is_configured": true, 00:23:28.919 "data_offset": 2048, 00:23:28.919 "data_size": 63488 00:23:28.919 }, 00:23:28.919 { 00:23:28.919 "name": "pt3", 00:23:28.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.919 "is_configured": true, 00:23:28.919 "data_offset": 2048, 00:23:28.919 "data_size": 63488 00:23:28.919 }, 00:23:28.919 { 00:23:28.919 "name": "pt4", 00:23:28.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:28.919 "is_configured": true, 00:23:28.919 "data_offset": 2048, 00:23:28.919 "data_size": 63488 00:23:28.919 } 00:23:28.919 ] 00:23:28.919 }' 00:23:28.919 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.919 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.178 23:04:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.178 [2024-12-09 23:04:44.930253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:29.178 "name": "raid_bdev1", 00:23:29.178 "aliases": [ 00:23:29.178 "73658668-9c69-4510-953a-5b21cc669928" 00:23:29.178 ], 00:23:29.178 "product_name": "Raid Volume", 00:23:29.178 "block_size": 512, 00:23:29.178 "num_blocks": 190464, 00:23:29.178 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:29.178 "assigned_rate_limits": { 00:23:29.178 "rw_ios_per_sec": 0, 00:23:29.178 "rw_mbytes_per_sec": 0, 00:23:29.178 "r_mbytes_per_sec": 0, 00:23:29.178 "w_mbytes_per_sec": 0 00:23:29.178 }, 00:23:29.178 "claimed": false, 00:23:29.178 "zoned": false, 00:23:29.178 "supported_io_types": { 00:23:29.178 "read": true, 00:23:29.178 "write": true, 00:23:29.178 "unmap": false, 00:23:29.178 "flush": false, 00:23:29.178 "reset": true, 00:23:29.178 "nvme_admin": false, 00:23:29.178 "nvme_io": false, 00:23:29.178 "nvme_io_md": false, 00:23:29.178 "write_zeroes": true, 00:23:29.178 "zcopy": false, 00:23:29.178 "get_zone_info": false, 00:23:29.178 "zone_management": false, 00:23:29.178 "zone_append": false, 00:23:29.178 "compare": false, 00:23:29.178 "compare_and_write": false, 00:23:29.178 "abort": false, 00:23:29.178 "seek_hole": false, 00:23:29.178 "seek_data": false, 00:23:29.178 "copy": false, 00:23:29.178 "nvme_iov_md": false 00:23:29.178 }, 00:23:29.178 "driver_specific": { 00:23:29.178 "raid": { 00:23:29.178 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:29.178 "strip_size_kb": 64, 00:23:29.178 "state": "online", 00:23:29.178 "raid_level": "raid5f", 00:23:29.178 "superblock": true, 00:23:29.178 "num_base_bdevs": 4, 00:23:29.178 "num_base_bdevs_discovered": 4, 
00:23:29.178 "num_base_bdevs_operational": 4, 00:23:29.178 "base_bdevs_list": [ 00:23:29.178 { 00:23:29.178 "name": "pt1", 00:23:29.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:29.178 "is_configured": true, 00:23:29.178 "data_offset": 2048, 00:23:29.178 "data_size": 63488 00:23:29.178 }, 00:23:29.178 { 00:23:29.178 "name": "pt2", 00:23:29.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.178 "is_configured": true, 00:23:29.178 "data_offset": 2048, 00:23:29.178 "data_size": 63488 00:23:29.178 }, 00:23:29.178 { 00:23:29.178 "name": "pt3", 00:23:29.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:29.178 "is_configured": true, 00:23:29.178 "data_offset": 2048, 00:23:29.178 "data_size": 63488 00:23:29.178 }, 00:23:29.178 { 00:23:29.178 "name": "pt4", 00:23:29.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:29.178 "is_configured": true, 00:23:29.178 "data_offset": 2048, 00:23:29.178 "data_size": 63488 00:23:29.178 } 00:23:29.178 ] 00:23:29.178 } 00:23:29.178 } 00:23:29.178 }' 00:23:29.178 23:04:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:29.178 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:29.178 pt2 00:23:29.178 pt3 00:23:29.178 pt4' 00:23:29.178 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.436 23:04:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.436 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:29.437 [2024-12-09 23:04:45.269658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.437 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.694 
23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73658668-9c69-4510-953a-5b21cc669928 '!=' 73658668-9c69-4510-953a-5b21cc669928 ']' 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.694 [2024-12-09 23:04:45.301491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.694 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.694 "name": "raid_bdev1", 00:23:29.694 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:29.694 "strip_size_kb": 64, 00:23:29.694 "state": "online", 00:23:29.694 "raid_level": "raid5f", 00:23:29.694 "superblock": true, 00:23:29.694 "num_base_bdevs": 4, 00:23:29.694 "num_base_bdevs_discovered": 3, 00:23:29.694 "num_base_bdevs_operational": 3, 00:23:29.694 "base_bdevs_list": [ 00:23:29.694 { 00:23:29.694 "name": null, 00:23:29.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.694 "is_configured": false, 00:23:29.694 "data_offset": 0, 00:23:29.694 "data_size": 63488 00:23:29.694 }, 00:23:29.694 { 00:23:29.694 "name": "pt2", 00:23:29.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.695 "is_configured": true, 00:23:29.695 "data_offset": 2048, 00:23:29.695 "data_size": 63488 00:23:29.695 }, 00:23:29.695 { 00:23:29.695 "name": "pt3", 00:23:29.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:29.695 "is_configured": true, 00:23:29.695 "data_offset": 2048, 00:23:29.695 "data_size": 63488 00:23:29.695 }, 00:23:29.695 { 00:23:29.695 "name": "pt4", 00:23:29.695 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:29.695 "is_configured": true, 00:23:29.695 
"data_offset": 2048, 00:23:29.695 "data_size": 63488 00:23:29.695 } 00:23:29.695 ] 00:23:29.695 }' 00:23:29.695 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.695 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.954 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:29.954 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.954 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.954 [2024-12-09 23:04:45.804614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:29.954 [2024-12-09 23:04:45.804715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:29.954 [2024-12-09 23:04:45.804837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.954 [2024-12-09 23:04:45.804962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:29.954 [2024-12-09 23:04:45.805016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:29.954 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.214 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.215 [2024-12-09 23:04:45.904411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:30.215 [2024-12-09 23:04:45.904572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.215 [2024-12-09 23:04:45.904619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:30.215 [2024-12-09 23:04:45.904690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.215 [2024-12-09 23:04:45.907222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.215 [2024-12-09 23:04:45.907303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:30.215 [2024-12-09 23:04:45.907437] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:30.215 [2024-12-09 23:04:45.907543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:30.215 pt2 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.215 "name": "raid_bdev1", 00:23:30.215 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:30.215 "strip_size_kb": 64, 00:23:30.215 "state": "configuring", 00:23:30.215 "raid_level": "raid5f", 00:23:30.215 "superblock": true, 00:23:30.215 
"num_base_bdevs": 4, 00:23:30.215 "num_base_bdevs_discovered": 1, 00:23:30.215 "num_base_bdevs_operational": 3, 00:23:30.215 "base_bdevs_list": [ 00:23:30.215 { 00:23:30.215 "name": null, 00:23:30.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.215 "is_configured": false, 00:23:30.215 "data_offset": 2048, 00:23:30.215 "data_size": 63488 00:23:30.215 }, 00:23:30.215 { 00:23:30.215 "name": "pt2", 00:23:30.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.215 "is_configured": true, 00:23:30.215 "data_offset": 2048, 00:23:30.215 "data_size": 63488 00:23:30.215 }, 00:23:30.215 { 00:23:30.215 "name": null, 00:23:30.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.215 "is_configured": false, 00:23:30.215 "data_offset": 2048, 00:23:30.215 "data_size": 63488 00:23:30.215 }, 00:23:30.215 { 00:23:30.215 "name": null, 00:23:30.215 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:30.215 "is_configured": false, 00:23:30.215 "data_offset": 2048, 00:23:30.215 "data_size": 63488 00:23:30.215 } 00:23:30.215 ] 00:23:30.215 }' 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.215 23:04:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.783 [2024-12-09 23:04:46.399589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:30.783 [2024-12-09 
23:04:46.399728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.783 [2024-12-09 23:04:46.399763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:30.783 [2024-12-09 23:04:46.399774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.783 [2024-12-09 23:04:46.400240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.783 [2024-12-09 23:04:46.400268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:30.783 [2024-12-09 23:04:46.400364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:30.783 [2024-12-09 23:04:46.400388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:30.783 pt3 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.783 "name": "raid_bdev1", 00:23:30.783 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:30.783 "strip_size_kb": 64, 00:23:30.783 "state": "configuring", 00:23:30.783 "raid_level": "raid5f", 00:23:30.783 "superblock": true, 00:23:30.783 "num_base_bdevs": 4, 00:23:30.783 "num_base_bdevs_discovered": 2, 00:23:30.783 "num_base_bdevs_operational": 3, 00:23:30.783 "base_bdevs_list": [ 00:23:30.783 { 00:23:30.783 "name": null, 00:23:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.783 "is_configured": false, 00:23:30.783 "data_offset": 2048, 00:23:30.783 "data_size": 63488 00:23:30.783 }, 00:23:30.783 { 00:23:30.783 "name": "pt2", 00:23:30.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.783 "is_configured": true, 00:23:30.783 "data_offset": 2048, 00:23:30.783 "data_size": 63488 00:23:30.783 }, 00:23:30.783 { 00:23:30.783 "name": "pt3", 00:23:30.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.783 "is_configured": true, 00:23:30.783 "data_offset": 2048, 00:23:30.783 "data_size": 63488 00:23:30.783 }, 00:23:30.783 { 00:23:30.783 "name": null, 00:23:30.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:30.783 "is_configured": false, 00:23:30.783 "data_offset": 2048, 
00:23:30.783 "data_size": 63488 00:23:30.783 } 00:23:30.783 ] 00:23:30.783 }' 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.783 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.043 [2024-12-09 23:04:46.842816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:31.043 [2024-12-09 23:04:46.842940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.043 [2024-12-09 23:04:46.842985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:31.043 [2024-12-09 23:04:46.843014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.043 [2024-12-09 23:04:46.843479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.043 [2024-12-09 23:04:46.843535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:31.043 [2024-12-09 23:04:46.843646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:31.043 [2024-12-09 23:04:46.843704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:31.043 [2024-12-09 23:04:46.843870] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:31.043 [2024-12-09 23:04:46.843907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:31.043 [2024-12-09 23:04:46.844168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:31.043 [2024-12-09 23:04:46.851507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:31.043 [2024-12-09 23:04:46.851569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:31.043 [2024-12-09 23:04:46.851905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.043 pt4 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.043 
23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.043 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.044 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.044 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.044 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.303 23:04:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.303 "name": "raid_bdev1", 00:23:31.303 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:31.303 "strip_size_kb": 64, 00:23:31.303 "state": "online", 00:23:31.303 "raid_level": "raid5f", 00:23:31.303 "superblock": true, 00:23:31.303 "num_base_bdevs": 4, 00:23:31.303 "num_base_bdevs_discovered": 3, 00:23:31.303 "num_base_bdevs_operational": 3, 00:23:31.303 "base_bdevs_list": [ 00:23:31.303 { 00:23:31.303 "name": null, 00:23:31.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.303 "is_configured": false, 00:23:31.303 "data_offset": 2048, 00:23:31.303 "data_size": 63488 00:23:31.303 }, 00:23:31.303 { 00:23:31.303 "name": "pt2", 00:23:31.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.303 "is_configured": true, 00:23:31.303 "data_offset": 2048, 00:23:31.303 "data_size": 63488 00:23:31.303 }, 00:23:31.303 { 00:23:31.303 "name": "pt3", 00:23:31.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:31.303 "is_configured": true, 00:23:31.303 "data_offset": 2048, 00:23:31.303 "data_size": 63488 00:23:31.303 }, 00:23:31.303 { 00:23:31.303 "name": "pt4", 00:23:31.303 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:31.303 "is_configured": true, 00:23:31.303 "data_offset": 2048, 00:23:31.303 "data_size": 63488 00:23:31.303 } 00:23:31.303 ] 00:23:31.303 }' 00:23:31.303 23:04:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.303 23:04:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 [2024-12-09 23:04:47.337180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.564 [2024-12-09 23:04:47.337258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.564 [2024-12-09 23:04:47.337375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.564 [2024-12-09 23:04:47.337495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.564 [2024-12-09 23:04:47.337552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.564 [2024-12-09 23:04:47.401065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:31.564 [2024-12-09 23:04:47.401139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.564 [2024-12-09 23:04:47.401171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:31.564 [2024-12-09 23:04:47.401186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.564 [2024-12-09 23:04:47.403603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.564 [2024-12-09 23:04:47.403641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:31.564 [2024-12-09 23:04:47.403733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:31.564 [2024-12-09 23:04:47.403790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:31.564 
[2024-12-09 23:04:47.403919] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:31.564 [2024-12-09 23:04:47.403935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.564 [2024-12-09 23:04:47.403950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:31.564 [2024-12-09 23:04:47.404021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:31.564 [2024-12-09 23:04:47.404112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:31.564 pt1 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.564 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.824 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.824 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.824 "name": "raid_bdev1", 00:23:31.824 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:31.824 "strip_size_kb": 64, 00:23:31.824 "state": "configuring", 00:23:31.824 "raid_level": "raid5f", 00:23:31.824 "superblock": true, 00:23:31.824 "num_base_bdevs": 4, 00:23:31.824 "num_base_bdevs_discovered": 2, 00:23:31.824 "num_base_bdevs_operational": 3, 00:23:31.824 "base_bdevs_list": [ 00:23:31.824 { 00:23:31.824 "name": null, 00:23:31.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.824 "is_configured": false, 00:23:31.824 "data_offset": 2048, 00:23:31.824 "data_size": 63488 00:23:31.824 }, 00:23:31.824 { 00:23:31.824 "name": "pt2", 00:23:31.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.824 "is_configured": true, 00:23:31.824 "data_offset": 2048, 00:23:31.824 "data_size": 63488 00:23:31.824 }, 00:23:31.824 { 00:23:31.824 "name": "pt3", 00:23:31.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:31.824 "is_configured": true, 00:23:31.824 "data_offset": 2048, 00:23:31.824 "data_size": 63488 00:23:31.824 }, 00:23:31.824 { 00:23:31.824 "name": null, 00:23:31.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:31.824 "is_configured": false, 00:23:31.824 "data_offset": 2048, 00:23:31.824 "data_size": 63488 00:23:31.824 } 00:23:31.824 ] 
00:23:31.824 }' 00:23:31.824 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.824 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.084 [2024-12-09 23:04:47.888303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:32.084 [2024-12-09 23:04:47.888374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.084 [2024-12-09 23:04:47.888400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:32.084 [2024-12-09 23:04:47.888411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.084 [2024-12-09 23:04:47.888980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.084 [2024-12-09 23:04:47.889071] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:23:32.084 [2024-12-09 23:04:47.889182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:32.084 [2024-12-09 23:04:47.889211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:32.084 [2024-12-09 23:04:47.889379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:32.084 [2024-12-09 23:04:47.889390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:32.084 [2024-12-09 23:04:47.889712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:32.084 [2024-12-09 23:04:47.897970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:32.084 [2024-12-09 23:04:47.898033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:32.084 [2024-12-09 23:04:47.898381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.084 pt4 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.084 23:04:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.084 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.342 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.342 "name": "raid_bdev1", 00:23:32.342 "uuid": "73658668-9c69-4510-953a-5b21cc669928", 00:23:32.342 "strip_size_kb": 64, 00:23:32.342 "state": "online", 00:23:32.342 "raid_level": "raid5f", 00:23:32.342 "superblock": true, 00:23:32.342 "num_base_bdevs": 4, 00:23:32.342 "num_base_bdevs_discovered": 3, 00:23:32.342 "num_base_bdevs_operational": 3, 00:23:32.342 "base_bdevs_list": [ 00:23:32.342 { 00:23:32.342 "name": null, 00:23:32.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.342 "is_configured": false, 00:23:32.342 "data_offset": 2048, 00:23:32.342 "data_size": 63488 00:23:32.342 }, 00:23:32.342 { 00:23:32.342 "name": "pt2", 00:23:32.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:32.343 "is_configured": true, 00:23:32.343 "data_offset": 2048, 00:23:32.343 "data_size": 63488 00:23:32.343 }, 00:23:32.343 { 00:23:32.343 "name": "pt3", 00:23:32.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:32.343 "is_configured": true, 00:23:32.343 "data_offset": 2048, 00:23:32.343 "data_size": 63488 
00:23:32.343 }, 00:23:32.343 { 00:23:32.343 "name": "pt4", 00:23:32.343 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:32.343 "is_configured": true, 00:23:32.343 "data_offset": 2048, 00:23:32.343 "data_size": 63488 00:23:32.343 } 00:23:32.343 ] 00:23:32.343 }' 00:23:32.343 23:04:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.343 23:04:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.601 [2024-12-09 23:04:48.339715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 73658668-9c69-4510-953a-5b21cc669928 '!=' 73658668-9c69-4510-953a-5b21cc669928 ']' 00:23:32.601 23:04:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84815 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84815 ']' 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84815 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:32.601 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.602 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84815 00:23:32.602 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.602 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.602 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84815' 00:23:32.602 killing process with pid 84815 00:23:32.602 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84815 00:23:32.602 [2024-12-09 23:04:48.405682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.602 23:04:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84815 00:23:32.602 [2024-12-09 23:04:48.405799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.602 [2024-12-09 23:04:48.405889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.602 [2024-12-09 23:04:48.405907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:33.167 [2024-12-09 23:04:48.856871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:34.542 ************************************ 00:23:34.542 END TEST raid5f_superblock_test 00:23:34.542 
************************************ 00:23:34.542 23:04:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:34.542 00:23:34.542 real 0m8.887s 00:23:34.542 user 0m13.915s 00:23:34.542 sys 0m1.602s 00:23:34.542 23:04:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.542 23:04:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.542 23:04:50 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:23:34.542 23:04:50 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:34.542 23:04:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:34.542 23:04:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.542 23:04:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:34.542 ************************************ 00:23:34.542 START TEST raid5f_rebuild_test 00:23:34.542 ************************************ 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:34.542 23:04:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85303 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85303 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85303 ']' 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.542 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.543 23:04:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:34.543 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.543 23:04:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.543 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:34.543 Zero copy mechanism will not be used. 00:23:34.543 [2024-12-09 23:04:50.248157] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:23:34.543 [2024-12-09 23:04:50.248291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85303 ] 00:23:34.801 [2024-12-09 23:04:50.427674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.801 [2024-12-09 23:04:50.551316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.060 [2024-12-09 23:04:50.752029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.060 [2024-12-09 23:04:50.752081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 BaseBdev1_malloc 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 [2024-12-09 23:04:51.143102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:23:35.322 [2024-12-09 23:04:51.143216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.322 [2024-12-09 23:04:51.143263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:35.322 [2024-12-09 23:04:51.143309] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.322 [2024-12-09 23:04:51.145455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.322 [2024-12-09 23:04:51.145549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:35.322 BaseBdev1 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.322 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 BaseBdev2_malloc 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 [2024-12-09 23:04:51.200371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:35.582 [2024-12-09 23:04:51.200506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.582 [2024-12-09 23:04:51.200580] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:35.582 [2024-12-09 23:04:51.200637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.582 [2024-12-09 23:04:51.203131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.582 [2024-12-09 23:04:51.203246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:35.582 BaseBdev2 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 BaseBdev3_malloc 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 [2024-12-09 23:04:51.269534] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:35.582 [2024-12-09 23:04:51.269601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.582 [2024-12-09 23:04:51.269628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:35.582 [2024-12-09 23:04:51.269641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.582 
[2024-12-09 23:04:51.272122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.582 [2024-12-09 23:04:51.272171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:35.582 BaseBdev3 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.582 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 BaseBdev4_malloc 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 [2024-12-09 23:04:51.324703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:35.583 [2024-12-09 23:04:51.324773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.583 [2024-12-09 23:04:51.324798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:35.583 [2024-12-09 23:04:51.324810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.583 [2024-12-09 23:04:51.327094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.583 [2024-12-09 23:04:51.327140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:23:35.583 BaseBdev4 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 spare_malloc 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 spare_delay 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 [2024-12-09 23:04:51.387708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:35.583 [2024-12-09 23:04:51.387769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.583 [2024-12-09 23:04:51.387791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:35.583 [2024-12-09 23:04:51.387803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.583 [2024-12-09 23:04:51.390165] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.583 [2024-12-09 23:04:51.390212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:35.583 spare 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 [2024-12-09 23:04:51.399767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.583 [2024-12-09 23:04:51.401844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.583 [2024-12-09 23:04:51.401976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:35.583 [2024-12-09 23:04:51.402098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:35.583 [2024-12-09 23:04:51.402260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:35.583 [2024-12-09 23:04:51.402314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:35.583 [2024-12-09 23:04:51.402668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:35.583 [2024-12-09 23:04:51.411442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:35.583 [2024-12-09 23:04:51.411524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:35.583 [2024-12-09 23:04:51.411817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.583 23:04:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.843 "name": "raid_bdev1", 00:23:35.843 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:35.843 "strip_size_kb": 64, 00:23:35.843 "state": "online", 00:23:35.843 
"raid_level": "raid5f", 00:23:35.843 "superblock": false, 00:23:35.843 "num_base_bdevs": 4, 00:23:35.843 "num_base_bdevs_discovered": 4, 00:23:35.843 "num_base_bdevs_operational": 4, 00:23:35.843 "base_bdevs_list": [ 00:23:35.843 { 00:23:35.843 "name": "BaseBdev1", 00:23:35.843 "uuid": "837c53cc-2f98-5bba-b05f-1e550215e201", 00:23:35.843 "is_configured": true, 00:23:35.843 "data_offset": 0, 00:23:35.843 "data_size": 65536 00:23:35.843 }, 00:23:35.843 { 00:23:35.843 "name": "BaseBdev2", 00:23:35.843 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:35.843 "is_configured": true, 00:23:35.843 "data_offset": 0, 00:23:35.843 "data_size": 65536 00:23:35.843 }, 00:23:35.843 { 00:23:35.843 "name": "BaseBdev3", 00:23:35.843 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:35.843 "is_configured": true, 00:23:35.843 "data_offset": 0, 00:23:35.843 "data_size": 65536 00:23:35.843 }, 00:23:35.843 { 00:23:35.843 "name": "BaseBdev4", 00:23:35.843 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:35.843 "is_configured": true, 00:23:35.843 "data_offset": 0, 00:23:35.843 "data_size": 65536 00:23:35.843 } 00:23:35.843 ] 00:23:35.843 }' 00:23:35.843 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.843 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.102 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:36.102 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:36.102 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.103 [2024-12-09 23:04:51.901038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:36.103 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:23:36.362 23:04:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:36.362 [2024-12-09 23:04:52.200740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:36.622 /dev/nbd0 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.622 1+0 records in 00:23:36.622 1+0 records out 00:23:36.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050045 s, 8.2 MB/s 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:36.622 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:37.193 512+0 records in 00:23:37.193 512+0 records out 00:23:37.193 100663296 bytes (101 MB, 96 MiB) copied, 0.543364 s, 185 MB/s 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.193 23:04:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:37.453 
[2024-12-09 23:04:53.066436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.453 [2024-12-09 23:04:53.082104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:37.453 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.454 "name": "raid_bdev1", 00:23:37.454 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:37.454 "strip_size_kb": 64, 00:23:37.454 "state": "online", 00:23:37.454 "raid_level": "raid5f", 00:23:37.454 "superblock": false, 00:23:37.454 "num_base_bdevs": 4, 00:23:37.454 "num_base_bdevs_discovered": 3, 00:23:37.454 "num_base_bdevs_operational": 3, 00:23:37.454 "base_bdevs_list": [ 00:23:37.454 { 00:23:37.454 "name": null, 00:23:37.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.454 "is_configured": false, 00:23:37.454 "data_offset": 0, 00:23:37.454 "data_size": 65536 00:23:37.454 }, 00:23:37.454 { 00:23:37.454 "name": "BaseBdev2", 00:23:37.454 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:37.454 "is_configured": true, 00:23:37.454 "data_offset": 0, 00:23:37.454 "data_size": 65536 00:23:37.454 }, 00:23:37.454 { 00:23:37.454 "name": "BaseBdev3", 00:23:37.454 "uuid": 
"410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:37.454 "is_configured": true, 00:23:37.454 "data_offset": 0, 00:23:37.454 "data_size": 65536 00:23:37.454 }, 00:23:37.454 { 00:23:37.454 "name": "BaseBdev4", 00:23:37.454 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:37.454 "is_configured": true, 00:23:37.454 "data_offset": 0, 00:23:37.454 "data_size": 65536 00:23:37.454 } 00:23:37.454 ] 00:23:37.454 }' 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.454 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.037 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:38.037 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.037 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.037 [2024-12-09 23:04:53.597335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.037 [2024-12-09 23:04:53.615723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:23:38.037 23:04:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.037 23:04:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:38.037 [2024-12-09 23:04:53.627631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:38.974 23:04:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.974 "name": "raid_bdev1", 00:23:38.974 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:38.974 "strip_size_kb": 64, 00:23:38.974 "state": "online", 00:23:38.974 "raid_level": "raid5f", 00:23:38.974 "superblock": false, 00:23:38.974 "num_base_bdevs": 4, 00:23:38.974 "num_base_bdevs_discovered": 4, 00:23:38.974 "num_base_bdevs_operational": 4, 00:23:38.974 "process": { 00:23:38.974 "type": "rebuild", 00:23:38.974 "target": "spare", 00:23:38.974 "progress": { 00:23:38.974 "blocks": 17280, 00:23:38.974 "percent": 8 00:23:38.974 } 00:23:38.974 }, 00:23:38.974 "base_bdevs_list": [ 00:23:38.974 { 00:23:38.974 "name": "spare", 00:23:38.974 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:38.974 "is_configured": true, 00:23:38.974 "data_offset": 0, 00:23:38.974 "data_size": 65536 00:23:38.974 }, 00:23:38.974 { 00:23:38.974 "name": "BaseBdev2", 00:23:38.974 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:38.974 "is_configured": true, 00:23:38.974 "data_offset": 0, 00:23:38.974 "data_size": 65536 00:23:38.974 }, 00:23:38.974 { 00:23:38.974 "name": "BaseBdev3", 00:23:38.974 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:38.974 "is_configured": true, 00:23:38.974 "data_offset": 0, 00:23:38.974 "data_size": 65536 00:23:38.974 }, 
00:23:38.974 { 00:23:38.974 "name": "BaseBdev4", 00:23:38.974 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:38.974 "is_configured": true, 00:23:38.974 "data_offset": 0, 00:23:38.974 "data_size": 65536 00:23:38.974 } 00:23:38.974 ] 00:23:38.974 }' 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.974 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.974 [2024-12-09 23:04:54.751473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.234 [2024-12-09 23:04:54.838270] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:39.234 [2024-12-09 23:04:54.838385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.234 [2024-12-09 23:04:54.838411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.234 [2024-12-09 23:04:54.838428] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.234 "name": "raid_bdev1", 00:23:39.234 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:39.234 "strip_size_kb": 64, 00:23:39.234 "state": "online", 00:23:39.234 "raid_level": "raid5f", 00:23:39.234 "superblock": false, 00:23:39.234 "num_base_bdevs": 4, 00:23:39.234 "num_base_bdevs_discovered": 3, 00:23:39.234 "num_base_bdevs_operational": 3, 00:23:39.234 "base_bdevs_list": [ 00:23:39.234 { 00:23:39.234 "name": null, 00:23:39.234 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:39.234 "is_configured": false, 00:23:39.234 "data_offset": 0, 00:23:39.234 "data_size": 65536 00:23:39.234 }, 00:23:39.234 { 00:23:39.234 "name": "BaseBdev2", 00:23:39.234 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:39.234 "is_configured": true, 00:23:39.234 "data_offset": 0, 00:23:39.234 "data_size": 65536 00:23:39.234 }, 00:23:39.234 { 00:23:39.234 "name": "BaseBdev3", 00:23:39.234 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:39.234 "is_configured": true, 00:23:39.234 "data_offset": 0, 00:23:39.234 "data_size": 65536 00:23:39.234 }, 00:23:39.234 { 00:23:39.234 "name": "BaseBdev4", 00:23:39.234 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:39.234 "is_configured": true, 00:23:39.234 "data_offset": 0, 00:23:39.234 "data_size": 65536 00:23:39.234 } 00:23:39.234 ] 00:23:39.234 }' 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.234 23:04:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.809 23:04:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.809 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.809 "name": "raid_bdev1", 00:23:39.809 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:39.809 "strip_size_kb": 64, 00:23:39.809 "state": "online", 00:23:39.809 "raid_level": "raid5f", 00:23:39.809 "superblock": false, 00:23:39.809 "num_base_bdevs": 4, 00:23:39.809 "num_base_bdevs_discovered": 3, 00:23:39.809 "num_base_bdevs_operational": 3, 00:23:39.809 "base_bdevs_list": [ 00:23:39.809 { 00:23:39.809 "name": null, 00:23:39.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.809 "is_configured": false, 00:23:39.809 "data_offset": 0, 00:23:39.809 "data_size": 65536 00:23:39.809 }, 00:23:39.809 { 00:23:39.809 "name": "BaseBdev2", 00:23:39.809 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:39.809 "is_configured": true, 00:23:39.809 "data_offset": 0, 00:23:39.809 "data_size": 65536 00:23:39.809 }, 00:23:39.810 { 00:23:39.810 "name": "BaseBdev3", 00:23:39.810 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:39.810 "is_configured": true, 00:23:39.810 "data_offset": 0, 00:23:39.810 "data_size": 65536 00:23:39.810 }, 00:23:39.810 { 00:23:39.810 "name": "BaseBdev4", 00:23:39.810 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:39.810 "is_configured": true, 00:23:39.810 "data_offset": 0, 00:23:39.810 "data_size": 65536 00:23:39.810 } 00:23:39.810 ] 00:23:39.810 }' 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.810 [2024-12-09 23:04:55.525629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.810 [2024-12-09 23:04:55.542315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.810 23:04:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:39.810 [2024-12-09 23:04:55.552270] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.749 23:04:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.749 "name": "raid_bdev1", 00:23:40.749 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:40.749 "strip_size_kb": 64, 00:23:40.749 "state": "online", 00:23:40.749 "raid_level": "raid5f", 00:23:40.749 "superblock": false, 00:23:40.749 "num_base_bdevs": 4, 00:23:40.749 "num_base_bdevs_discovered": 4, 00:23:40.749 "num_base_bdevs_operational": 4, 00:23:40.749 "process": { 00:23:40.749 "type": "rebuild", 00:23:40.749 "target": "spare", 00:23:40.749 "progress": { 00:23:40.749 "blocks": 17280, 00:23:40.749 "percent": 8 00:23:40.749 } 00:23:40.749 }, 00:23:40.749 "base_bdevs_list": [ 00:23:40.749 { 00:23:40.749 "name": "spare", 00:23:40.749 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:40.749 "is_configured": true, 00:23:40.749 "data_offset": 0, 00:23:40.749 "data_size": 65536 00:23:40.749 }, 00:23:40.749 { 00:23:40.749 "name": "BaseBdev2", 00:23:40.749 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:40.749 "is_configured": true, 00:23:40.749 "data_offset": 0, 00:23:40.749 "data_size": 65536 00:23:40.749 }, 00:23:40.749 { 00:23:40.749 "name": "BaseBdev3", 00:23:40.749 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:40.749 "is_configured": true, 00:23:40.749 "data_offset": 0, 00:23:40.749 "data_size": 65536 00:23:40.749 }, 00:23:40.749 { 00:23:40.749 "name": "BaseBdev4", 00:23:40.749 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:40.749 "is_configured": true, 00:23:40.749 "data_offset": 0, 00:23:40.749 "data_size": 65536 00:23:40.749 } 00:23:40.749 ] 00:23:40.749 }' 00:23:40.749 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=654 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.008 "name": "raid_bdev1", 00:23:41.008 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 
00:23:41.008 "strip_size_kb": 64, 00:23:41.008 "state": "online", 00:23:41.008 "raid_level": "raid5f", 00:23:41.008 "superblock": false, 00:23:41.008 "num_base_bdevs": 4, 00:23:41.008 "num_base_bdevs_discovered": 4, 00:23:41.008 "num_base_bdevs_operational": 4, 00:23:41.008 "process": { 00:23:41.008 "type": "rebuild", 00:23:41.008 "target": "spare", 00:23:41.008 "progress": { 00:23:41.008 "blocks": 21120, 00:23:41.008 "percent": 10 00:23:41.008 } 00:23:41.008 }, 00:23:41.008 "base_bdevs_list": [ 00:23:41.008 { 00:23:41.008 "name": "spare", 00:23:41.008 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:41.008 "is_configured": true, 00:23:41.008 "data_offset": 0, 00:23:41.008 "data_size": 65536 00:23:41.008 }, 00:23:41.008 { 00:23:41.008 "name": "BaseBdev2", 00:23:41.008 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:41.008 "is_configured": true, 00:23:41.008 "data_offset": 0, 00:23:41.008 "data_size": 65536 00:23:41.008 }, 00:23:41.008 { 00:23:41.008 "name": "BaseBdev3", 00:23:41.008 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:41.008 "is_configured": true, 00:23:41.008 "data_offset": 0, 00:23:41.008 "data_size": 65536 00:23:41.008 }, 00:23:41.008 { 00:23:41.008 "name": "BaseBdev4", 00:23:41.008 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:41.008 "is_configured": true, 00:23:41.008 "data_offset": 0, 00:23:41.008 "data_size": 65536 00:23:41.008 } 00:23:41.008 ] 00:23:41.008 }' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.008 23:04:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:42.400 23:04:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.400 "name": "raid_bdev1", 00:23:42.400 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:42.400 "strip_size_kb": 64, 00:23:42.400 "state": "online", 00:23:42.400 "raid_level": "raid5f", 00:23:42.400 "superblock": false, 00:23:42.400 "num_base_bdevs": 4, 00:23:42.400 "num_base_bdevs_discovered": 4, 00:23:42.400 "num_base_bdevs_operational": 4, 00:23:42.400 "process": { 00:23:42.400 "type": "rebuild", 00:23:42.400 "target": "spare", 00:23:42.400 "progress": { 00:23:42.400 "blocks": 42240, 00:23:42.400 "percent": 21 00:23:42.400 } 00:23:42.400 }, 00:23:42.400 "base_bdevs_list": [ 00:23:42.400 { 00:23:42.400 "name": "spare", 00:23:42.400 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 
00:23:42.400 "is_configured": true, 00:23:42.400 "data_offset": 0, 00:23:42.400 "data_size": 65536 00:23:42.400 }, 00:23:42.400 { 00:23:42.400 "name": "BaseBdev2", 00:23:42.400 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:42.400 "is_configured": true, 00:23:42.400 "data_offset": 0, 00:23:42.400 "data_size": 65536 00:23:42.400 }, 00:23:42.400 { 00:23:42.400 "name": "BaseBdev3", 00:23:42.400 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:42.400 "is_configured": true, 00:23:42.400 "data_offset": 0, 00:23:42.400 "data_size": 65536 00:23:42.400 }, 00:23:42.400 { 00:23:42.400 "name": "BaseBdev4", 00:23:42.400 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:42.400 "is_configured": true, 00:23:42.400 "data_offset": 0, 00:23:42.400 "data_size": 65536 00:23:42.400 } 00:23:42.400 ] 00:23:42.400 }' 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.400 23:04:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.336 23:04:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.336 "name": "raid_bdev1", 00:23:43.336 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:43.336 "strip_size_kb": 64, 00:23:43.336 "state": "online", 00:23:43.336 "raid_level": "raid5f", 00:23:43.336 "superblock": false, 00:23:43.336 "num_base_bdevs": 4, 00:23:43.336 "num_base_bdevs_discovered": 4, 00:23:43.336 "num_base_bdevs_operational": 4, 00:23:43.336 "process": { 00:23:43.336 "type": "rebuild", 00:23:43.336 "target": "spare", 00:23:43.336 "progress": { 00:23:43.336 "blocks": 65280, 00:23:43.336 "percent": 33 00:23:43.336 } 00:23:43.336 }, 00:23:43.336 "base_bdevs_list": [ 00:23:43.336 { 00:23:43.336 "name": "spare", 00:23:43.336 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:43.336 "is_configured": true, 00:23:43.336 "data_offset": 0, 00:23:43.336 "data_size": 65536 00:23:43.336 }, 00:23:43.336 { 00:23:43.336 "name": "BaseBdev2", 00:23:43.336 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:43.336 "is_configured": true, 00:23:43.336 "data_offset": 0, 00:23:43.336 "data_size": 65536 00:23:43.336 }, 00:23:43.336 { 00:23:43.336 "name": "BaseBdev3", 00:23:43.336 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:43.336 "is_configured": true, 00:23:43.336 "data_offset": 0, 00:23:43.336 "data_size": 65536 00:23:43.336 }, 00:23:43.336 { 00:23:43.336 "name": 
"BaseBdev4", 00:23:43.336 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:43.336 "is_configured": true, 00:23:43.336 "data_offset": 0, 00:23:43.336 "data_size": 65536 00:23:43.336 } 00:23:43.336 ] 00:23:43.336 }' 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.336 23:04:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.712 23:05:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.712 "name": "raid_bdev1", 00:23:44.712 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:44.712 "strip_size_kb": 64, 00:23:44.712 "state": "online", 00:23:44.712 "raid_level": "raid5f", 00:23:44.712 "superblock": false, 00:23:44.712 "num_base_bdevs": 4, 00:23:44.712 "num_base_bdevs_discovered": 4, 00:23:44.712 "num_base_bdevs_operational": 4, 00:23:44.712 "process": { 00:23:44.712 "type": "rebuild", 00:23:44.712 "target": "spare", 00:23:44.712 "progress": { 00:23:44.712 "blocks": 86400, 00:23:44.712 "percent": 43 00:23:44.712 } 00:23:44.712 }, 00:23:44.712 "base_bdevs_list": [ 00:23:44.712 { 00:23:44.712 "name": "spare", 00:23:44.712 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:44.712 "is_configured": true, 00:23:44.712 "data_offset": 0, 00:23:44.712 "data_size": 65536 00:23:44.712 }, 00:23:44.712 { 00:23:44.712 "name": "BaseBdev2", 00:23:44.712 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:44.712 "is_configured": true, 00:23:44.712 "data_offset": 0, 00:23:44.712 "data_size": 65536 00:23:44.712 }, 00:23:44.712 { 00:23:44.712 "name": "BaseBdev3", 00:23:44.712 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:44.712 "is_configured": true, 00:23:44.712 "data_offset": 0, 00:23:44.712 "data_size": 65536 00:23:44.712 }, 00:23:44.712 { 00:23:44.712 "name": "BaseBdev4", 00:23:44.712 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:44.712 "is_configured": true, 00:23:44.712 "data_offset": 0, 00:23:44.712 "data_size": 65536 00:23:44.712 } 00:23:44.712 ] 00:23:44.712 }' 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.712 23:05:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:45.650 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.651 "name": "raid_bdev1", 00:23:45.651 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:45.651 "strip_size_kb": 64, 00:23:45.651 "state": "online", 00:23:45.651 "raid_level": "raid5f", 00:23:45.651 "superblock": false, 00:23:45.651 "num_base_bdevs": 4, 00:23:45.651 "num_base_bdevs_discovered": 4, 00:23:45.651 "num_base_bdevs_operational": 4, 00:23:45.651 "process": { 00:23:45.651 "type": "rebuild", 00:23:45.651 "target": "spare", 00:23:45.651 "progress": { 00:23:45.651 "blocks": 107520, 00:23:45.651 "percent": 54 00:23:45.651 } 
00:23:45.651 }, 00:23:45.651 "base_bdevs_list": [ 00:23:45.651 { 00:23:45.651 "name": "spare", 00:23:45.651 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:45.651 "is_configured": true, 00:23:45.651 "data_offset": 0, 00:23:45.651 "data_size": 65536 00:23:45.651 }, 00:23:45.651 { 00:23:45.651 "name": "BaseBdev2", 00:23:45.651 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:45.651 "is_configured": true, 00:23:45.651 "data_offset": 0, 00:23:45.651 "data_size": 65536 00:23:45.651 }, 00:23:45.651 { 00:23:45.651 "name": "BaseBdev3", 00:23:45.651 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:45.651 "is_configured": true, 00:23:45.651 "data_offset": 0, 00:23:45.651 "data_size": 65536 00:23:45.651 }, 00:23:45.651 { 00:23:45.651 "name": "BaseBdev4", 00:23:45.651 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:45.651 "is_configured": true, 00:23:45.651 "data_offset": 0, 00:23:45.651 "data_size": 65536 00:23:45.651 } 00:23:45.651 ] 00:23:45.651 }' 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.651 23:05:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:47.031 
23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.031 "name": "raid_bdev1", 00:23:47.031 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:47.031 "strip_size_kb": 64, 00:23:47.031 "state": "online", 00:23:47.031 "raid_level": "raid5f", 00:23:47.031 "superblock": false, 00:23:47.031 "num_base_bdevs": 4, 00:23:47.031 "num_base_bdevs_discovered": 4, 00:23:47.031 "num_base_bdevs_operational": 4, 00:23:47.031 "process": { 00:23:47.031 "type": "rebuild", 00:23:47.031 "target": "spare", 00:23:47.031 "progress": { 00:23:47.031 "blocks": 130560, 00:23:47.031 "percent": 66 00:23:47.031 } 00:23:47.031 }, 00:23:47.031 "base_bdevs_list": [ 00:23:47.031 { 00:23:47.031 "name": "spare", 00:23:47.031 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:47.031 "is_configured": true, 00:23:47.031 "data_offset": 0, 00:23:47.031 "data_size": 65536 00:23:47.031 }, 00:23:47.031 { 00:23:47.031 "name": "BaseBdev2", 00:23:47.031 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:47.031 "is_configured": true, 00:23:47.031 "data_offset": 0, 00:23:47.031 "data_size": 65536 00:23:47.031 }, 00:23:47.031 { 00:23:47.031 "name": "BaseBdev3", 00:23:47.031 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 
00:23:47.031 "is_configured": true, 00:23:47.031 "data_offset": 0, 00:23:47.031 "data_size": 65536 00:23:47.031 }, 00:23:47.031 { 00:23:47.031 "name": "BaseBdev4", 00:23:47.031 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:47.031 "is_configured": true, 00:23:47.031 "data_offset": 0, 00:23:47.031 "data_size": 65536 00:23:47.031 } 00:23:47.031 ] 00:23:47.031 }' 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.031 23:05:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.970 "name": "raid_bdev1", 00:23:47.970 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:47.970 "strip_size_kb": 64, 00:23:47.970 "state": "online", 00:23:47.970 "raid_level": "raid5f", 00:23:47.970 "superblock": false, 00:23:47.970 "num_base_bdevs": 4, 00:23:47.970 "num_base_bdevs_discovered": 4, 00:23:47.970 "num_base_bdevs_operational": 4, 00:23:47.970 "process": { 00:23:47.970 "type": "rebuild", 00:23:47.970 "target": "spare", 00:23:47.970 "progress": { 00:23:47.970 "blocks": 151680, 00:23:47.970 "percent": 77 00:23:47.970 } 00:23:47.970 }, 00:23:47.970 "base_bdevs_list": [ 00:23:47.970 { 00:23:47.970 "name": "spare", 00:23:47.970 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 0, 00:23:47.970 "data_size": 65536 00:23:47.970 }, 00:23:47.970 { 00:23:47.970 "name": "BaseBdev2", 00:23:47.970 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 0, 00:23:47.970 "data_size": 65536 00:23:47.970 }, 00:23:47.970 { 00:23:47.970 "name": "BaseBdev3", 00:23:47.970 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 0, 00:23:47.970 "data_size": 65536 00:23:47.970 }, 00:23:47.970 { 00:23:47.970 "name": "BaseBdev4", 00:23:47.970 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:47.970 "is_configured": true, 00:23:47.970 "data_offset": 0, 00:23:47.970 "data_size": 65536 00:23:47.970 } 00:23:47.970 ] 00:23:47.970 }' 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.970 23:05:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.998 "name": "raid_bdev1", 00:23:48.998 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:48.998 "strip_size_kb": 64, 00:23:48.998 "state": "online", 00:23:48.998 "raid_level": "raid5f", 00:23:48.998 "superblock": false, 00:23:48.998 "num_base_bdevs": 4, 00:23:48.998 "num_base_bdevs_discovered": 4, 00:23:48.998 "num_base_bdevs_operational": 4, 00:23:48.998 
"process": { 00:23:48.998 "type": "rebuild", 00:23:48.998 "target": "spare", 00:23:48.998 "progress": { 00:23:48.998 "blocks": 174720, 00:23:48.998 "percent": 88 00:23:48.998 } 00:23:48.998 }, 00:23:48.998 "base_bdevs_list": [ 00:23:48.998 { 00:23:48.998 "name": "spare", 00:23:48.998 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:48.998 "is_configured": true, 00:23:48.998 "data_offset": 0, 00:23:48.998 "data_size": 65536 00:23:48.998 }, 00:23:48.998 { 00:23:48.998 "name": "BaseBdev2", 00:23:48.998 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:48.998 "is_configured": true, 00:23:48.998 "data_offset": 0, 00:23:48.998 "data_size": 65536 00:23:48.998 }, 00:23:48.998 { 00:23:48.998 "name": "BaseBdev3", 00:23:48.998 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:48.998 "is_configured": true, 00:23:48.998 "data_offset": 0, 00:23:48.998 "data_size": 65536 00:23:48.998 }, 00:23:48.998 { 00:23:48.998 "name": "BaseBdev4", 00:23:48.998 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:48.998 "is_configured": true, 00:23:48.998 "data_offset": 0, 00:23:48.998 "data_size": 65536 00:23:48.998 } 00:23:48.998 ] 00:23:48.998 }' 00:23:48.998 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.286 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.286 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.286 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.286 23:05:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.221 23:05:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.222 [2024-12-09 23:05:05.948959] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:50.222 [2024-12-09 23:05:05.949060] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:50.222 [2024-12-09 23:05:05.949124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.222 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.222 "name": "raid_bdev1", 00:23:50.222 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:50.222 "strip_size_kb": 64, 00:23:50.222 "state": "online", 00:23:50.222 "raid_level": "raid5f", 00:23:50.222 "superblock": false, 00:23:50.222 "num_base_bdevs": 4, 00:23:50.222 "num_base_bdevs_discovered": 4, 00:23:50.222 "num_base_bdevs_operational": 4, 00:23:50.222 "process": { 00:23:50.222 "type": "rebuild", 00:23:50.222 "target": "spare", 00:23:50.222 "progress": { 00:23:50.222 "blocks": 195840, 00:23:50.222 "percent": 99 00:23:50.222 } 00:23:50.222 }, 00:23:50.222 "base_bdevs_list": [ 
00:23:50.222 { 00:23:50.222 "name": "spare", 00:23:50.222 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:50.222 "is_configured": true, 00:23:50.222 "data_offset": 0, 00:23:50.222 "data_size": 65536 00:23:50.222 }, 00:23:50.222 { 00:23:50.222 "name": "BaseBdev2", 00:23:50.222 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:50.222 "is_configured": true, 00:23:50.222 "data_offset": 0, 00:23:50.222 "data_size": 65536 00:23:50.222 }, 00:23:50.222 { 00:23:50.222 "name": "BaseBdev3", 00:23:50.222 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:50.222 "is_configured": true, 00:23:50.222 "data_offset": 0, 00:23:50.222 "data_size": 65536 00:23:50.222 }, 00:23:50.222 { 00:23:50.222 "name": "BaseBdev4", 00:23:50.222 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:50.222 "is_configured": true, 00:23:50.222 "data_offset": 0, 00:23:50.222 "data_size": 65536 00:23:50.222 } 00:23:50.222 ] 00:23:50.222 }' 00:23:50.222 23:05:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.222 23:05:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.222 23:05:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.222 23:05:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.222 23:05:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:51.598 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:51.598 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.599 "name": "raid_bdev1", 00:23:51.599 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:51.599 "strip_size_kb": 64, 00:23:51.599 "state": "online", 00:23:51.599 "raid_level": "raid5f", 00:23:51.599 "superblock": false, 00:23:51.599 "num_base_bdevs": 4, 00:23:51.599 "num_base_bdevs_discovered": 4, 00:23:51.599 "num_base_bdevs_operational": 4, 00:23:51.599 "base_bdevs_list": [ 00:23:51.599 { 00:23:51.599 "name": "spare", 00:23:51.599 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev2", 00:23:51.599 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev3", 00:23:51.599 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev4", 00:23:51.599 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:51.599 "is_configured": 
true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 } 00:23:51.599 ] 00:23:51.599 }' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.599 "name": "raid_bdev1", 00:23:51.599 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:51.599 "strip_size_kb": 64, 00:23:51.599 "state": 
"online", 00:23:51.599 "raid_level": "raid5f", 00:23:51.599 "superblock": false, 00:23:51.599 "num_base_bdevs": 4, 00:23:51.599 "num_base_bdevs_discovered": 4, 00:23:51.599 "num_base_bdevs_operational": 4, 00:23:51.599 "base_bdevs_list": [ 00:23:51.599 { 00:23:51.599 "name": "spare", 00:23:51.599 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev2", 00:23:51.599 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev3", 00:23:51.599 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev4", 00:23:51.599 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 } 00:23:51.599 ] 00:23:51.599 }' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.599 23:05:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.599 "name": "raid_bdev1", 00:23:51.599 "uuid": "9f942448-36c4-49ca-bcc9-de486334030e", 00:23:51.599 "strip_size_kb": 64, 00:23:51.599 "state": "online", 00:23:51.599 "raid_level": "raid5f", 00:23:51.599 "superblock": false, 00:23:51.599 "num_base_bdevs": 4, 00:23:51.599 "num_base_bdevs_discovered": 4, 00:23:51.599 "num_base_bdevs_operational": 4, 00:23:51.599 "base_bdevs_list": [ 00:23:51.599 { 00:23:51.599 "name": "spare", 00:23:51.599 "uuid": "4e6c1424-cc65-519b-bab9-1adc5824d31c", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 
"name": "BaseBdev2", 00:23:51.599 "uuid": "2ecee18c-f0d2-5417-a2ee-c4fbab83149f", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev3", 00:23:51.599 "uuid": "410e0faf-7d79-5e1c-9f57-8830fcd3386f", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 }, 00:23:51.599 { 00:23:51.599 "name": "BaseBdev4", 00:23:51.599 "uuid": "1d05d2a5-ded3-509b-bb5d-6fd2019667a7", 00:23:51.599 "is_configured": true, 00:23:51.599 "data_offset": 0, 00:23:51.599 "data_size": 65536 00:23:51.599 } 00:23:51.599 ] 00:23:51.599 }' 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.599 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.172 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:52.172 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.172 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.172 [2024-12-09 23:05:07.811326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.172 [2024-12-09 23:05:07.811503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.172 [2024-12-09 23:05:07.811655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.172 [2024-12-09 23:05:07.811819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.172 [2024-12-09 23:05:07.811884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:52.172 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.173 23:05:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:52.173 23:05:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:52.435 /dev/nbd0 00:23:52.435 23:05:08 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:52.435 1+0 records in 00:23:52.435 1+0 records out 00:23:52.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386047 s, 10.6 MB/s 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:52.435 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:52.695 /dev/nbd1 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:52.695 1+0 records in 00:23:52.695 1+0 records out 00:23:52.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340312 s, 12.0 MB/s 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:52.695 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.962 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:53.223 23:05:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85303 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85303 ']' 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85303 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85303 00:23:53.482 killing process with pid 85303 00:23:53.482 Received shutdown signal, test time was about 60.000000 seconds 00:23:53.482 00:23:53.482 Latency(us) 00:23:53.482 [2024-12-09T23:05:09.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.482 [2024-12-09T23:05:09.338Z] =================================================================================================================== 00:23:53.482 [2024-12-09T23:05:09.338Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85303' 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85303 00:23:53.482 [2024-12-09 23:05:09.266855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:53.482 23:05:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85303 00:23:54.051 [2024-12-09 23:05:09.861733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.428 ************************************ 00:23:55.428 END TEST raid5f_rebuild_test 00:23:55.428 ************************************ 00:23:55.428 23:05:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:55.428 00:23:55.428 real 0m21.037s 00:23:55.428 user 0m25.341s 00:23:55.428 sys 0m2.346s 00:23:55.428 23:05:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.428 23:05:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.428 23:05:11 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:23:55.429 23:05:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:55.429 23:05:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.429 23:05:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.429 ************************************ 00:23:55.429 START TEST raid5f_rebuild_test_sb 00:23:55.429 ************************************ 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:55.429 23:05:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85829 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85829 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85829 ']' 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.429 23:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.688 [2024-12-09 23:05:11.361429] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:23:55.688 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:55.688 Zero copy mechanism will not be used. 
00:23:55.688 [2024-12-09 23:05:11.361675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85829 ] 00:23:55.688 [2024-12-09 23:05:11.540165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.947 [2024-12-09 23:05:11.672814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.206 [2024-12-09 23:05:11.905083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.206 [2024-12-09 23:05:11.905118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.494 BaseBdev1_malloc 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.494 [2024-12-09 23:05:12.284796] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:56.494 [2024-12-09 23:05:12.284887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.494 [2024-12-09 23:05:12.285003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:56.494 [2024-12-09 23:05:12.285079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.494 [2024-12-09 23:05:12.287985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.494 [2024-12-09 23:05:12.288114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:56.494 BaseBdev1 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.494 BaseBdev2_malloc 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.494 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.494 [2024-12-09 23:05:12.346157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:56.494 [2024-12-09 23:05:12.346273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:23:56.494 [2024-12-09 23:05:12.346330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:56.494 [2024-12-09 23:05:12.346371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.754 [2024-12-09 23:05:12.348636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.754 [2024-12-09 23:05:12.348719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.754 BaseBdev2 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.754 BaseBdev3_malloc 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.754 [2024-12-09 23:05:12.415941] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:56.754 [2024-12-09 23:05:12.416050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.754 [2024-12-09 23:05:12.416105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:56.754 [2024-12-09 
23:05:12.416142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.754 [2024-12-09 23:05:12.418240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.754 [2024-12-09 23:05:12.418315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:56.754 BaseBdev3 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.754 BaseBdev4_malloc 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.754 [2024-12-09 23:05:12.472763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:56.754 [2024-12-09 23:05:12.472879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.754 [2024-12-09 23:05:12.472921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:56.754 [2024-12-09 23:05:12.472954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.754 [2024-12-09 23:05:12.475084] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:56.754 [2024-12-09 23:05:12.475157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:56.754 BaseBdev4 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.754 spare_malloc 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.754 spare_delay 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:56.754 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.755 [2024-12-09 23:05:12.540758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.755 [2024-12-09 23:05:12.540886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.755 [2024-12-09 23:05:12.540937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:23:56.755 [2024-12-09 23:05:12.540974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.755 [2024-12-09 23:05:12.543315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.755 [2024-12-09 23:05:12.543401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.755 spare 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.755 [2024-12-09 23:05:12.552799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.755 [2024-12-09 23:05:12.554804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.755 [2024-12-09 23:05:12.554917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.755 [2024-12-09 23:05:12.555001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:56.755 [2024-12-09 23:05:12.555252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:56.755 [2024-12-09 23:05:12.555306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:56.755 [2024-12-09 23:05:12.555635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:56.755 [2024-12-09 23:05:12.564600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:56.755 [2024-12-09 23:05:12.564672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:23:56.755 [2024-12-09 23:05:12.564923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.755 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.015 23:05:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.015 "name": "raid_bdev1", 00:23:57.015 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:23:57.015 "strip_size_kb": 64, 00:23:57.015 "state": "online", 00:23:57.015 "raid_level": "raid5f", 00:23:57.015 "superblock": true, 00:23:57.015 "num_base_bdevs": 4, 00:23:57.015 "num_base_bdevs_discovered": 4, 00:23:57.015 "num_base_bdevs_operational": 4, 00:23:57.015 "base_bdevs_list": [ 00:23:57.015 { 00:23:57.015 "name": "BaseBdev1", 00:23:57.015 "uuid": "2d58f88a-d7b9-5e8c-bb8b-a2055b6b7f9a", 00:23:57.015 "is_configured": true, 00:23:57.015 "data_offset": 2048, 00:23:57.015 "data_size": 63488 00:23:57.015 }, 00:23:57.015 { 00:23:57.015 "name": "BaseBdev2", 00:23:57.015 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:23:57.015 "is_configured": true, 00:23:57.015 "data_offset": 2048, 00:23:57.015 "data_size": 63488 00:23:57.015 }, 00:23:57.015 { 00:23:57.015 "name": "BaseBdev3", 00:23:57.015 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:23:57.015 "is_configured": true, 00:23:57.015 "data_offset": 2048, 00:23:57.015 "data_size": 63488 00:23:57.015 }, 00:23:57.015 { 00:23:57.015 "name": "BaseBdev4", 00:23:57.015 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:23:57.015 "is_configured": true, 00:23:57.015 "data_offset": 2048, 00:23:57.015 "data_size": 63488 00:23:57.015 } 00:23:57.015 ] 00:23:57.015 }' 00:23:57.015 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.015 23:05:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.274 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.275 23:05:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.275 [2024-12-09 23:05:13.053917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:57.275 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:57.537 23:05:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.537 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:57.537 [2024-12-09 23:05:13.341218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:57.537 /dev/nbd0 00:23:57.801 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:57.801 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:57.801 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:57.802 1+0 records in 00:23:57.802 
1+0 records out 00:23:57.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403861 s, 10.1 MB/s 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:57.802 23:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:58.384 496+0 records in 00:23:58.384 496+0 records out 00:23:58.384 97517568 bytes (98 MB, 93 MiB) copied, 0.659974 s, 148 MB/s 00:23:58.384 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:58.384 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:58.384 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:58.384 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:58.384 23:05:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:58.384 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:58.384 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:58.655 [2024-12-09 23:05:14.314630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.655 [2024-12-09 23:05:14.333432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:58.655 23:05:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.655 "name": "raid_bdev1", 00:23:58.655 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:23:58.655 "strip_size_kb": 64, 00:23:58.655 "state": "online", 00:23:58.655 "raid_level": "raid5f", 00:23:58.655 "superblock": true, 00:23:58.655 "num_base_bdevs": 4, 00:23:58.655 "num_base_bdevs_discovered": 3, 00:23:58.655 "num_base_bdevs_operational": 3, 00:23:58.655 
"base_bdevs_list": [ 00:23:58.655 { 00:23:58.655 "name": null, 00:23:58.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.655 "is_configured": false, 00:23:58.655 "data_offset": 0, 00:23:58.655 "data_size": 63488 00:23:58.655 }, 00:23:58.655 { 00:23:58.655 "name": "BaseBdev2", 00:23:58.655 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:23:58.655 "is_configured": true, 00:23:58.655 "data_offset": 2048, 00:23:58.655 "data_size": 63488 00:23:58.655 }, 00:23:58.655 { 00:23:58.655 "name": "BaseBdev3", 00:23:58.655 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:23:58.655 "is_configured": true, 00:23:58.655 "data_offset": 2048, 00:23:58.655 "data_size": 63488 00:23:58.655 }, 00:23:58.655 { 00:23:58.655 "name": "BaseBdev4", 00:23:58.655 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:23:58.655 "is_configured": true, 00:23:58.655 "data_offset": 2048, 00:23:58.655 "data_size": 63488 00:23:58.655 } 00:23:58.655 ] 00:23:58.655 }' 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.655 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:58.927 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.927 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.927 [2024-12-09 23:05:14.764713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:58.927 [2024-12-09 23:05:14.782183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:23:58.927 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.927 23:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:59.208 [2024-12-09 23:05:14.793215] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:00.165 "name": "raid_bdev1", 00:24:00.165 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:00.165 "strip_size_kb": 64, 00:24:00.165 "state": "online", 00:24:00.165 "raid_level": "raid5f", 00:24:00.165 "superblock": true, 00:24:00.165 "num_base_bdevs": 4, 00:24:00.165 "num_base_bdevs_discovered": 4, 00:24:00.165 "num_base_bdevs_operational": 4, 00:24:00.165 "process": { 00:24:00.165 "type": "rebuild", 00:24:00.165 "target": "spare", 00:24:00.165 "progress": { 00:24:00.165 "blocks": 17280, 00:24:00.165 "percent": 9 00:24:00.165 } 00:24:00.165 }, 00:24:00.165 "base_bdevs_list": [ 00:24:00.165 { 00:24:00.165 "name": "spare", 00:24:00.165 "uuid": 
"1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:00.165 "is_configured": true, 00:24:00.165 "data_offset": 2048, 00:24:00.165 "data_size": 63488 00:24:00.165 }, 00:24:00.165 { 00:24:00.165 "name": "BaseBdev2", 00:24:00.165 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:00.165 "is_configured": true, 00:24:00.165 "data_offset": 2048, 00:24:00.165 "data_size": 63488 00:24:00.165 }, 00:24:00.165 { 00:24:00.165 "name": "BaseBdev3", 00:24:00.165 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:00.165 "is_configured": true, 00:24:00.165 "data_offset": 2048, 00:24:00.165 "data_size": 63488 00:24:00.165 }, 00:24:00.165 { 00:24:00.165 "name": "BaseBdev4", 00:24:00.165 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:00.165 "is_configured": true, 00:24:00.165 "data_offset": 2048, 00:24:00.165 "data_size": 63488 00:24:00.165 } 00:24:00.165 ] 00:24:00.165 }' 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.165 23:05:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.165 [2024-12-09 23:05:15.948842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.165 [2024-12-09 23:05:16.002991] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:00.165 [2024-12-09 23:05:16.003198] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.165 [2024-12-09 23:05:16.003253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.165 [2024-12-09 23:05:16.003285] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.423 "name": "raid_bdev1", 00:24:00.423 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:00.423 "strip_size_kb": 64, 00:24:00.423 "state": "online", 00:24:00.423 "raid_level": "raid5f", 00:24:00.423 "superblock": true, 00:24:00.423 "num_base_bdevs": 4, 00:24:00.423 "num_base_bdevs_discovered": 3, 00:24:00.423 "num_base_bdevs_operational": 3, 00:24:00.423 "base_bdevs_list": [ 00:24:00.423 { 00:24:00.423 "name": null, 00:24:00.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.423 "is_configured": false, 00:24:00.423 "data_offset": 0, 00:24:00.423 "data_size": 63488 00:24:00.423 }, 00:24:00.423 { 00:24:00.423 "name": "BaseBdev2", 00:24:00.423 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:00.423 "is_configured": true, 00:24:00.423 "data_offset": 2048, 00:24:00.423 "data_size": 63488 00:24:00.423 }, 00:24:00.423 { 00:24:00.423 "name": "BaseBdev3", 00:24:00.423 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:00.423 "is_configured": true, 00:24:00.423 "data_offset": 2048, 00:24:00.423 "data_size": 63488 00:24:00.423 }, 00:24:00.423 { 00:24:00.423 "name": "BaseBdev4", 00:24:00.423 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:00.423 "is_configured": true, 00:24:00.423 "data_offset": 2048, 00:24:00.423 "data_size": 63488 00:24:00.423 } 00:24:00.423 ] 00:24:00.423 }' 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.423 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.682 
23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:00.682 "name": "raid_bdev1", 00:24:00.682 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:00.682 "strip_size_kb": 64, 00:24:00.682 "state": "online", 00:24:00.682 "raid_level": "raid5f", 00:24:00.682 "superblock": true, 00:24:00.682 "num_base_bdevs": 4, 00:24:00.682 "num_base_bdevs_discovered": 3, 00:24:00.682 "num_base_bdevs_operational": 3, 00:24:00.682 "base_bdevs_list": [ 00:24:00.682 { 00:24:00.682 "name": null, 00:24:00.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.682 "is_configured": false, 00:24:00.682 "data_offset": 0, 00:24:00.682 "data_size": 63488 00:24:00.682 }, 00:24:00.682 { 00:24:00.682 "name": "BaseBdev2", 00:24:00.682 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:00.682 "is_configured": true, 00:24:00.682 "data_offset": 2048, 00:24:00.682 "data_size": 63488 00:24:00.682 }, 00:24:00.682 { 00:24:00.682 "name": "BaseBdev3", 00:24:00.682 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:00.682 "is_configured": true, 00:24:00.682 "data_offset": 2048, 00:24:00.682 
"data_size": 63488 00:24:00.682 }, 00:24:00.682 { 00:24:00.682 "name": "BaseBdev4", 00:24:00.682 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:00.682 "is_configured": true, 00:24:00.682 "data_offset": 2048, 00:24:00.682 "data_size": 63488 00:24:00.682 } 00:24:00.682 ] 00:24:00.682 }' 00:24:00.682 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.941 [2024-12-09 23:05:16.607647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:00.941 [2024-12-09 23:05:16.625966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.941 23:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:00.941 [2024-12-09 23:05:16.638257] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.875 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.875 "name": "raid_bdev1", 00:24:01.875 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:01.875 "strip_size_kb": 64, 00:24:01.875 "state": "online", 00:24:01.875 "raid_level": "raid5f", 00:24:01.875 "superblock": true, 00:24:01.875 "num_base_bdevs": 4, 00:24:01.875 "num_base_bdevs_discovered": 4, 00:24:01.875 "num_base_bdevs_operational": 4, 00:24:01.875 "process": { 00:24:01.875 "type": "rebuild", 00:24:01.875 "target": "spare", 00:24:01.875 "progress": { 00:24:01.875 "blocks": 17280, 00:24:01.875 "percent": 9 00:24:01.875 } 00:24:01.875 }, 00:24:01.875 "base_bdevs_list": [ 00:24:01.875 { 00:24:01.875 "name": "spare", 00:24:01.875 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:01.875 "is_configured": true, 00:24:01.875 "data_offset": 2048, 00:24:01.875 "data_size": 63488 00:24:01.875 }, 00:24:01.875 { 00:24:01.875 "name": "BaseBdev2", 00:24:01.875 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:01.876 "is_configured": true, 00:24:01.876 "data_offset": 2048, 00:24:01.876 "data_size": 63488 00:24:01.876 }, 00:24:01.876 { 
00:24:01.876 "name": "BaseBdev3", 00:24:01.876 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:01.876 "is_configured": true, 00:24:01.876 "data_offset": 2048, 00:24:01.876 "data_size": 63488 00:24:01.876 }, 00:24:01.876 { 00:24:01.876 "name": "BaseBdev4", 00:24:01.876 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:01.876 "is_configured": true, 00:24:01.876 "data_offset": 2048, 00:24:01.876 "data_size": 63488 00:24:01.876 } 00:24:01.876 ] 00:24:01.876 }' 00:24:01.876 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:02.135 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=675 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:02.135 "name": "raid_bdev1", 00:24:02.135 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:02.135 "strip_size_kb": 64, 00:24:02.135 "state": "online", 00:24:02.135 "raid_level": "raid5f", 00:24:02.135 "superblock": true, 00:24:02.135 "num_base_bdevs": 4, 00:24:02.135 "num_base_bdevs_discovered": 4, 00:24:02.135 "num_base_bdevs_operational": 4, 00:24:02.135 "process": { 00:24:02.135 "type": "rebuild", 00:24:02.135 "target": "spare", 00:24:02.135 "progress": { 00:24:02.135 "blocks": 21120, 00:24:02.135 "percent": 11 00:24:02.135 } 00:24:02.135 }, 00:24:02.135 "base_bdevs_list": [ 00:24:02.135 { 00:24:02.135 "name": "spare", 00:24:02.135 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:02.135 "is_configured": true, 00:24:02.135 "data_offset": 2048, 00:24:02.135 "data_size": 63488 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "name": "BaseBdev2", 00:24:02.135 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:02.135 "is_configured": true, 00:24:02.135 "data_offset": 2048, 00:24:02.135 "data_size": 63488 00:24:02.135 }, 00:24:02.135 { 
00:24:02.135 "name": "BaseBdev3", 00:24:02.135 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:02.135 "is_configured": true, 00:24:02.135 "data_offset": 2048, 00:24:02.135 "data_size": 63488 00:24:02.135 }, 00:24:02.135 { 00:24:02.135 "name": "BaseBdev4", 00:24:02.135 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:02.135 "is_configured": true, 00:24:02.135 "data_offset": 2048, 00:24:02.135 "data_size": 63488 00:24:02.135 } 00:24:02.135 ] 00:24:02.135 }' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.135 23:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.512 23:05:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.512 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.512 "name": "raid_bdev1", 00:24:03.512 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:03.512 "strip_size_kb": 64, 00:24:03.512 "state": "online", 00:24:03.512 "raid_level": "raid5f", 00:24:03.512 "superblock": true, 00:24:03.512 "num_base_bdevs": 4, 00:24:03.512 "num_base_bdevs_discovered": 4, 00:24:03.512 "num_base_bdevs_operational": 4, 00:24:03.512 "process": { 00:24:03.512 "type": "rebuild", 00:24:03.512 "target": "spare", 00:24:03.512 "progress": { 00:24:03.512 "blocks": 42240, 00:24:03.512 "percent": 22 00:24:03.512 } 00:24:03.512 }, 00:24:03.512 "base_bdevs_list": [ 00:24:03.512 { 00:24:03.512 "name": "spare", 00:24:03.512 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:03.512 "is_configured": true, 00:24:03.512 "data_offset": 2048, 00:24:03.512 "data_size": 63488 00:24:03.512 }, 00:24:03.512 { 00:24:03.512 "name": "BaseBdev2", 00:24:03.512 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:03.512 "is_configured": true, 00:24:03.512 "data_offset": 2048, 00:24:03.512 "data_size": 63488 00:24:03.512 }, 00:24:03.512 { 00:24:03.512 "name": "BaseBdev3", 00:24:03.512 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:03.513 "is_configured": true, 00:24:03.513 "data_offset": 2048, 00:24:03.513 "data_size": 63488 00:24:03.513 }, 00:24:03.513 { 00:24:03.513 "name": "BaseBdev4", 00:24:03.513 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:03.513 "is_configured": true, 00:24:03.513 "data_offset": 2048, 00:24:03.513 "data_size": 63488 00:24:03.513 } 00:24:03.513 ] 00:24:03.513 }' 00:24:03.513 23:05:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.513 23:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.513 23:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.513 23:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.513 23:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.449 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.449 "name": "raid_bdev1", 00:24:04.449 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:04.449 "strip_size_kb": 64, 00:24:04.449 "state": 
"online", 00:24:04.449 "raid_level": "raid5f", 00:24:04.449 "superblock": true, 00:24:04.449 "num_base_bdevs": 4, 00:24:04.449 "num_base_bdevs_discovered": 4, 00:24:04.449 "num_base_bdevs_operational": 4, 00:24:04.449 "process": { 00:24:04.449 "type": "rebuild", 00:24:04.449 "target": "spare", 00:24:04.449 "progress": { 00:24:04.449 "blocks": 65280, 00:24:04.449 "percent": 34 00:24:04.449 } 00:24:04.449 }, 00:24:04.449 "base_bdevs_list": [ 00:24:04.449 { 00:24:04.449 "name": "spare", 00:24:04.449 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:04.449 "is_configured": true, 00:24:04.449 "data_offset": 2048, 00:24:04.449 "data_size": 63488 00:24:04.449 }, 00:24:04.449 { 00:24:04.449 "name": "BaseBdev2", 00:24:04.449 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:04.449 "is_configured": true, 00:24:04.449 "data_offset": 2048, 00:24:04.449 "data_size": 63488 00:24:04.449 }, 00:24:04.449 { 00:24:04.449 "name": "BaseBdev3", 00:24:04.449 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:04.449 "is_configured": true, 00:24:04.449 "data_offset": 2048, 00:24:04.449 "data_size": 63488 00:24:04.449 }, 00:24:04.449 { 00:24:04.449 "name": "BaseBdev4", 00:24:04.449 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:04.449 "is_configured": true, 00:24:04.449 "data_offset": 2048, 00:24:04.449 "data_size": 63488 00:24:04.449 } 00:24:04.449 ] 00:24:04.450 }' 00:24:04.450 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.450 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.450 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.450 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.450 23:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.384 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.642 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.642 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.642 "name": "raid_bdev1", 00:24:05.642 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:05.642 "strip_size_kb": 64, 00:24:05.642 "state": "online", 00:24:05.642 "raid_level": "raid5f", 00:24:05.642 "superblock": true, 00:24:05.642 "num_base_bdevs": 4, 00:24:05.642 "num_base_bdevs_discovered": 4, 00:24:05.642 "num_base_bdevs_operational": 4, 00:24:05.642 "process": { 00:24:05.642 "type": "rebuild", 00:24:05.642 "target": "spare", 00:24:05.642 "progress": { 00:24:05.642 "blocks": 86400, 00:24:05.642 "percent": 45 00:24:05.642 } 00:24:05.642 }, 00:24:05.642 "base_bdevs_list": [ 00:24:05.642 { 00:24:05.642 "name": "spare", 00:24:05.642 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 
00:24:05.642 "is_configured": true, 00:24:05.642 "data_offset": 2048, 00:24:05.642 "data_size": 63488 00:24:05.642 }, 00:24:05.642 { 00:24:05.642 "name": "BaseBdev2", 00:24:05.642 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:05.642 "is_configured": true, 00:24:05.642 "data_offset": 2048, 00:24:05.642 "data_size": 63488 00:24:05.642 }, 00:24:05.642 { 00:24:05.642 "name": "BaseBdev3", 00:24:05.642 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:05.642 "is_configured": true, 00:24:05.642 "data_offset": 2048, 00:24:05.642 "data_size": 63488 00:24:05.642 }, 00:24:05.642 { 00:24:05.642 "name": "BaseBdev4", 00:24:05.642 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:05.642 "is_configured": true, 00:24:05.642 "data_offset": 2048, 00:24:05.642 "data_size": 63488 00:24:05.642 } 00:24:05.642 ] 00:24:05.642 }' 00:24:05.642 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.642 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.642 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.643 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.643 23:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:06.577 23:05:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.577 "name": "raid_bdev1", 00:24:06.577 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:06.577 "strip_size_kb": 64, 00:24:06.577 "state": "online", 00:24:06.577 "raid_level": "raid5f", 00:24:06.577 "superblock": true, 00:24:06.577 "num_base_bdevs": 4, 00:24:06.577 "num_base_bdevs_discovered": 4, 00:24:06.577 "num_base_bdevs_operational": 4, 00:24:06.577 "process": { 00:24:06.577 "type": "rebuild", 00:24:06.577 "target": "spare", 00:24:06.577 "progress": { 00:24:06.577 "blocks": 107520, 00:24:06.577 "percent": 56 00:24:06.577 } 00:24:06.577 }, 00:24:06.577 "base_bdevs_list": [ 00:24:06.577 { 00:24:06.577 "name": "spare", 00:24:06.577 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:06.577 "is_configured": true, 00:24:06.577 "data_offset": 2048, 00:24:06.577 "data_size": 63488 00:24:06.577 }, 00:24:06.577 { 00:24:06.577 "name": "BaseBdev2", 00:24:06.577 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:06.577 "is_configured": true, 00:24:06.577 "data_offset": 2048, 00:24:06.577 "data_size": 63488 00:24:06.577 }, 00:24:06.577 { 00:24:06.577 "name": "BaseBdev3", 00:24:06.577 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:06.577 "is_configured": true, 00:24:06.577 "data_offset": 2048, 00:24:06.577 
"data_size": 63488 00:24:06.577 }, 00:24:06.577 { 00:24:06.577 "name": "BaseBdev4", 00:24:06.577 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:06.577 "is_configured": true, 00:24:06.577 "data_offset": 2048, 00:24:06.577 "data_size": 63488 00:24:06.577 } 00:24:06.577 ] 00:24:06.577 }' 00:24:06.577 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.835 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.835 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:06.835 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.835 23:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.838 
23:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.838 "name": "raid_bdev1", 00:24:07.838 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:07.838 "strip_size_kb": 64, 00:24:07.838 "state": "online", 00:24:07.838 "raid_level": "raid5f", 00:24:07.838 "superblock": true, 00:24:07.838 "num_base_bdevs": 4, 00:24:07.838 "num_base_bdevs_discovered": 4, 00:24:07.838 "num_base_bdevs_operational": 4, 00:24:07.838 "process": { 00:24:07.838 "type": "rebuild", 00:24:07.838 "target": "spare", 00:24:07.838 "progress": { 00:24:07.838 "blocks": 130560, 00:24:07.838 "percent": 68 00:24:07.838 } 00:24:07.838 }, 00:24:07.838 "base_bdevs_list": [ 00:24:07.838 { 00:24:07.838 "name": "spare", 00:24:07.838 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:07.838 "is_configured": true, 00:24:07.838 "data_offset": 2048, 00:24:07.838 "data_size": 63488 00:24:07.838 }, 00:24:07.838 { 00:24:07.838 "name": "BaseBdev2", 00:24:07.838 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:07.838 "is_configured": true, 00:24:07.838 "data_offset": 2048, 00:24:07.838 "data_size": 63488 00:24:07.838 }, 00:24:07.838 { 00:24:07.838 "name": "BaseBdev3", 00:24:07.838 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:07.838 "is_configured": true, 00:24:07.838 "data_offset": 2048, 00:24:07.838 "data_size": 63488 00:24:07.838 }, 00:24:07.838 { 00:24:07.838 "name": "BaseBdev4", 00:24:07.838 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:07.838 "is_configured": true, 00:24:07.838 "data_offset": 2048, 00:24:07.838 "data_size": 63488 00:24:07.838 } 00:24:07.838 ] 00:24:07.838 }' 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.838 23:05:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.838 23:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.227 "name": "raid_bdev1", 00:24:09.227 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:09.227 "strip_size_kb": 64, 00:24:09.227 "state": "online", 00:24:09.227 "raid_level": "raid5f", 00:24:09.227 "superblock": true, 00:24:09.227 "num_base_bdevs": 4, 00:24:09.227 "num_base_bdevs_discovered": 4, 00:24:09.227 "num_base_bdevs_operational": 
4, 00:24:09.227 "process": { 00:24:09.227 "type": "rebuild", 00:24:09.227 "target": "spare", 00:24:09.227 "progress": { 00:24:09.227 "blocks": 151680, 00:24:09.227 "percent": 79 00:24:09.227 } 00:24:09.227 }, 00:24:09.227 "base_bdevs_list": [ 00:24:09.227 { 00:24:09.227 "name": "spare", 00:24:09.227 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:09.227 "is_configured": true, 00:24:09.227 "data_offset": 2048, 00:24:09.227 "data_size": 63488 00:24:09.227 }, 00:24:09.227 { 00:24:09.227 "name": "BaseBdev2", 00:24:09.227 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:09.227 "is_configured": true, 00:24:09.227 "data_offset": 2048, 00:24:09.227 "data_size": 63488 00:24:09.227 }, 00:24:09.227 { 00:24:09.227 "name": "BaseBdev3", 00:24:09.227 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:09.227 "is_configured": true, 00:24:09.227 "data_offset": 2048, 00:24:09.227 "data_size": 63488 00:24:09.227 }, 00:24:09.227 { 00:24:09.227 "name": "BaseBdev4", 00:24:09.227 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:09.227 "is_configured": true, 00:24:09.227 "data_offset": 2048, 00:24:09.227 "data_size": 63488 00:24:09.227 } 00:24:09.227 ] 00:24:09.227 }' 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.227 23:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.167 
23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.167 "name": "raid_bdev1", 00:24:10.167 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:10.167 "strip_size_kb": 64, 00:24:10.167 "state": "online", 00:24:10.167 "raid_level": "raid5f", 00:24:10.167 "superblock": true, 00:24:10.167 "num_base_bdevs": 4, 00:24:10.167 "num_base_bdevs_discovered": 4, 00:24:10.167 "num_base_bdevs_operational": 4, 00:24:10.167 "process": { 00:24:10.167 "type": "rebuild", 00:24:10.167 "target": "spare", 00:24:10.167 "progress": { 00:24:10.167 "blocks": 174720, 00:24:10.167 "percent": 91 00:24:10.167 } 00:24:10.167 }, 00:24:10.167 "base_bdevs_list": [ 00:24:10.167 { 00:24:10.167 "name": "spare", 00:24:10.167 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:10.167 "is_configured": true, 00:24:10.167 "data_offset": 2048, 00:24:10.167 "data_size": 63488 00:24:10.167 }, 00:24:10.167 { 00:24:10.167 "name": "BaseBdev2", 00:24:10.167 "uuid": 
"4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:10.167 "is_configured": true, 00:24:10.167 "data_offset": 2048, 00:24:10.167 "data_size": 63488 00:24:10.167 }, 00:24:10.167 { 00:24:10.167 "name": "BaseBdev3", 00:24:10.167 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:10.167 "is_configured": true, 00:24:10.167 "data_offset": 2048, 00:24:10.167 "data_size": 63488 00:24:10.167 }, 00:24:10.167 { 00:24:10.167 "name": "BaseBdev4", 00:24:10.167 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:10.167 "is_configured": true, 00:24:10.167 "data_offset": 2048, 00:24:10.167 "data_size": 63488 00:24:10.167 } 00:24:10.167 ] 00:24:10.167 }' 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.167 23:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:11.119 [2024-12-09 23:05:26.725069] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:11.119 [2024-12-09 23:05:26.725266] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:11.119 [2024-12-09 23:05:26.725544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.119 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:11.119 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.119 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.120 23:05:26 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.120 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.120 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.120 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.120 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.385 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.385 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.385 23:05:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.385 "name": "raid_bdev1", 00:24:11.385 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:11.385 "strip_size_kb": 64, 00:24:11.385 "state": "online", 00:24:11.385 "raid_level": "raid5f", 00:24:11.385 "superblock": true, 00:24:11.385 "num_base_bdevs": 4, 00:24:11.385 "num_base_bdevs_discovered": 4, 00:24:11.385 "num_base_bdevs_operational": 4, 00:24:11.385 "base_bdevs_list": [ 00:24:11.385 { 00:24:11.385 "name": "spare", 00:24:11.385 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 }, 00:24:11.385 { 00:24:11.385 "name": "BaseBdev2", 00:24:11.385 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 }, 00:24:11.385 { 00:24:11.385 "name": "BaseBdev3", 00:24:11.385 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 }, 
00:24:11.385 { 00:24:11.385 "name": "BaseBdev4", 00:24:11.385 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 } 00:24:11.385 ] 00:24:11.385 }' 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.385 "name": "raid_bdev1", 00:24:11.385 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:11.385 "strip_size_kb": 64, 00:24:11.385 "state": "online", 00:24:11.385 "raid_level": "raid5f", 00:24:11.385 "superblock": true, 00:24:11.385 "num_base_bdevs": 4, 00:24:11.385 "num_base_bdevs_discovered": 4, 00:24:11.385 "num_base_bdevs_operational": 4, 00:24:11.385 "base_bdevs_list": [ 00:24:11.385 { 00:24:11.385 "name": "spare", 00:24:11.385 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 }, 00:24:11.385 { 00:24:11.385 "name": "BaseBdev2", 00:24:11.385 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 }, 00:24:11.385 { 00:24:11.385 "name": "BaseBdev3", 00:24:11.385 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 }, 00:24:11.385 { 00:24:11.385 "name": "BaseBdev4", 00:24:11.385 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:11.385 "is_configured": true, 00:24:11.385 "data_offset": 2048, 00:24:11.385 "data_size": 63488 00:24:11.385 } 00:24:11.385 ] 00:24:11.385 }' 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:11.385 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:11.645 23:05:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.645 "name": "raid_bdev1", 00:24:11.645 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:11.645 "strip_size_kb": 64, 00:24:11.645 "state": "online", 00:24:11.645 "raid_level": "raid5f", 00:24:11.645 "superblock": true, 00:24:11.645 "num_base_bdevs": 4, 00:24:11.645 "num_base_bdevs_discovered": 4, 00:24:11.645 "num_base_bdevs_operational": 4, 00:24:11.645 
"base_bdevs_list": [ 00:24:11.645 { 00:24:11.645 "name": "spare", 00:24:11.645 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:11.645 "is_configured": true, 00:24:11.645 "data_offset": 2048, 00:24:11.645 "data_size": 63488 00:24:11.645 }, 00:24:11.645 { 00:24:11.645 "name": "BaseBdev2", 00:24:11.645 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:11.645 "is_configured": true, 00:24:11.645 "data_offset": 2048, 00:24:11.645 "data_size": 63488 00:24:11.645 }, 00:24:11.645 { 00:24:11.645 "name": "BaseBdev3", 00:24:11.645 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:11.645 "is_configured": true, 00:24:11.645 "data_offset": 2048, 00:24:11.645 "data_size": 63488 00:24:11.645 }, 00:24:11.645 { 00:24:11.645 "name": "BaseBdev4", 00:24:11.645 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:11.645 "is_configured": true, 00:24:11.645 "data_offset": 2048, 00:24:11.645 "data_size": 63488 00:24:11.645 } 00:24:11.645 ] 00:24:11.645 }' 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.645 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.905 [2024-12-09 23:05:27.701850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:11.905 [2024-12-09 23:05:27.701938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:11.905 [2024-12-09 23:05:27.702067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.905 [2024-12-09 23:05:27.702218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:24:11.905 [2024-12-09 23:05:27.702295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:11.905 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.163 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:12.163 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:24:12.163 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.163 23:05:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:12.163 /dev/nbd0 00:24:12.163 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.164 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:12.164 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.422 1+0 records in 00:24:12.422 1+0 records out 00:24:12.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404046 s, 10.1 MB/s 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:12.422 23:05:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.422 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:12.422 /dev/nbd1 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:24:12.681 1+0 records in 00:24:12.681 1+0 records out 00:24:12.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364359 s, 11.2 MB/s 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.681 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:12.682 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:12.941 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:13.201 23:05:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.201 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.460 [2024-12-09 23:05:29.058135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.460 [2024-12-09 23:05:29.058204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.460 [2024-12-09 23:05:29.058230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:13.460 [2024-12-09 23:05:29.058241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.460 [2024-12-09 23:05:29.060981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.460 [2024-12-09 23:05:29.061027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.460 [2024-12-09 23:05:29.061138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:13.460 [2024-12-09 23:05:29.061199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.460 [2024-12-09 23:05:29.061370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:13.460 [2024-12-09 23:05:29.061514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.460 [2024-12-09 23:05:29.061628] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:13.460 spare 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.460 [2024-12-09 23:05:29.161555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:13.460 [2024-12-09 23:05:29.161626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:13.460 [2024-12-09 23:05:29.162017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:24:13.460 [2024-12-09 23:05:29.170986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:13.460 [2024-12-09 23:05:29.171088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:13.460 [2024-12-09 23:05:29.171391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.460 "name": "raid_bdev1", 00:24:13.460 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:13.460 "strip_size_kb": 64, 00:24:13.460 "state": "online", 00:24:13.460 "raid_level": "raid5f", 00:24:13.460 "superblock": true, 00:24:13.460 "num_base_bdevs": 4, 00:24:13.460 "num_base_bdevs_discovered": 4, 00:24:13.460 "num_base_bdevs_operational": 4, 00:24:13.460 "base_bdevs_list": [ 00:24:13.460 { 00:24:13.460 "name": "spare", 00:24:13.460 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:13.460 "is_configured": true, 00:24:13.460 "data_offset": 2048, 00:24:13.460 "data_size": 63488 00:24:13.460 }, 00:24:13.460 { 00:24:13.460 "name": "BaseBdev2", 00:24:13.460 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:13.460 "is_configured": true, 00:24:13.460 "data_offset": 
2048, 00:24:13.460 "data_size": 63488 00:24:13.460 }, 00:24:13.460 { 00:24:13.460 "name": "BaseBdev3", 00:24:13.460 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:13.460 "is_configured": true, 00:24:13.460 "data_offset": 2048, 00:24:13.460 "data_size": 63488 00:24:13.460 }, 00:24:13.460 { 00:24:13.460 "name": "BaseBdev4", 00:24:13.460 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:13.460 "is_configured": true, 00:24:13.460 "data_offset": 2048, 00:24:13.460 "data_size": 63488 00:24:13.460 } 00:24:13.460 ] 00:24:13.460 }' 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.460 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.029 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.029 "name": 
"raid_bdev1", 00:24:14.029 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:14.029 "strip_size_kb": 64, 00:24:14.029 "state": "online", 00:24:14.029 "raid_level": "raid5f", 00:24:14.029 "superblock": true, 00:24:14.029 "num_base_bdevs": 4, 00:24:14.029 "num_base_bdevs_discovered": 4, 00:24:14.029 "num_base_bdevs_operational": 4, 00:24:14.029 "base_bdevs_list": [ 00:24:14.029 { 00:24:14.029 "name": "spare", 00:24:14.029 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:14.029 "is_configured": true, 00:24:14.029 "data_offset": 2048, 00:24:14.029 "data_size": 63488 00:24:14.029 }, 00:24:14.029 { 00:24:14.029 "name": "BaseBdev2", 00:24:14.029 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:14.029 "is_configured": true, 00:24:14.029 "data_offset": 2048, 00:24:14.029 "data_size": 63488 00:24:14.029 }, 00:24:14.030 { 00:24:14.030 "name": "BaseBdev3", 00:24:14.030 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:14.030 "is_configured": true, 00:24:14.030 "data_offset": 2048, 00:24:14.030 "data_size": 63488 00:24:14.030 }, 00:24:14.030 { 00:24:14.030 "name": "BaseBdev4", 00:24:14.030 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:14.030 "is_configured": true, 00:24:14.030 "data_offset": 2048, 00:24:14.030 "data_size": 63488 00:24:14.030 } 00:24:14.030 ] 00:24:14.030 }' 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.030 [2024-12-09 23:05:29.808897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.030 "name": "raid_bdev1", 00:24:14.030 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:14.030 "strip_size_kb": 64, 00:24:14.030 "state": "online", 00:24:14.030 "raid_level": "raid5f", 00:24:14.030 "superblock": true, 00:24:14.030 "num_base_bdevs": 4, 00:24:14.030 "num_base_bdevs_discovered": 3, 00:24:14.030 "num_base_bdevs_operational": 3, 00:24:14.030 "base_bdevs_list": [ 00:24:14.030 { 00:24:14.030 "name": null, 00:24:14.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.030 "is_configured": false, 00:24:14.030 "data_offset": 0, 00:24:14.030 "data_size": 63488 00:24:14.030 }, 00:24:14.030 { 00:24:14.030 "name": "BaseBdev2", 00:24:14.030 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:14.030 "is_configured": true, 00:24:14.030 "data_offset": 2048, 00:24:14.030 "data_size": 63488 00:24:14.030 }, 00:24:14.030 { 00:24:14.030 "name": "BaseBdev3", 00:24:14.030 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:14.030 "is_configured": true, 00:24:14.030 "data_offset": 2048, 00:24:14.030 "data_size": 63488 00:24:14.030 }, 00:24:14.030 { 00:24:14.030 "name": "BaseBdev4", 00:24:14.030 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:14.030 "is_configured": true, 00:24:14.030 "data_offset": 
2048, 00:24:14.030 "data_size": 63488 00:24:14.030 } 00:24:14.030 ] 00:24:14.030 }' 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.030 23:05:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.599 23:05:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:14.599 23:05:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.599 23:05:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.599 [2024-12-09 23:05:30.220729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.599 [2024-12-09 23:05:30.221019] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:14.599 [2024-12-09 23:05:30.221066] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:14.599 [2024-12-09 23:05:30.221114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.599 [2024-12-09 23:05:30.240214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:24:14.599 23:05:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.599 23:05:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:14.599 [2024-12-09 23:05:30.252398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:15.534 "name": "raid_bdev1", 00:24:15.534 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:15.534 "strip_size_kb": 64, 00:24:15.534 "state": "online", 00:24:15.534 
"raid_level": "raid5f", 00:24:15.534 "superblock": true, 00:24:15.534 "num_base_bdevs": 4, 00:24:15.534 "num_base_bdevs_discovered": 4, 00:24:15.534 "num_base_bdevs_operational": 4, 00:24:15.534 "process": { 00:24:15.534 "type": "rebuild", 00:24:15.534 "target": "spare", 00:24:15.534 "progress": { 00:24:15.534 "blocks": 17280, 00:24:15.534 "percent": 9 00:24:15.534 } 00:24:15.534 }, 00:24:15.534 "base_bdevs_list": [ 00:24:15.534 { 00:24:15.534 "name": "spare", 00:24:15.534 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:15.534 "is_configured": true, 00:24:15.534 "data_offset": 2048, 00:24:15.534 "data_size": 63488 00:24:15.534 }, 00:24:15.534 { 00:24:15.534 "name": "BaseBdev2", 00:24:15.534 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:15.534 "is_configured": true, 00:24:15.534 "data_offset": 2048, 00:24:15.534 "data_size": 63488 00:24:15.534 }, 00:24:15.534 { 00:24:15.534 "name": "BaseBdev3", 00:24:15.534 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:15.534 "is_configured": true, 00:24:15.534 "data_offset": 2048, 00:24:15.534 "data_size": 63488 00:24:15.534 }, 00:24:15.534 { 00:24:15.534 "name": "BaseBdev4", 00:24:15.534 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:15.534 "is_configured": true, 00:24:15.534 "data_offset": 2048, 00:24:15.534 "data_size": 63488 00:24:15.534 } 00:24:15.534 ] 00:24:15.534 }' 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.534 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.793 [2024-12-09 23:05:31.403993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.793 [2024-12-09 23:05:31.462177] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:15.793 [2024-12-09 23:05:31.462396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.793 [2024-12-09 23:05:31.462454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:15.793 [2024-12-09 23:05:31.462525] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.793 "name": "raid_bdev1", 00:24:15.793 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:15.793 "strip_size_kb": 64, 00:24:15.793 "state": "online", 00:24:15.793 "raid_level": "raid5f", 00:24:15.793 "superblock": true, 00:24:15.793 "num_base_bdevs": 4, 00:24:15.793 "num_base_bdevs_discovered": 3, 00:24:15.793 "num_base_bdevs_operational": 3, 00:24:15.793 "base_bdevs_list": [ 00:24:15.793 { 00:24:15.793 "name": null, 00:24:15.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.793 "is_configured": false, 00:24:15.793 "data_offset": 0, 00:24:15.793 "data_size": 63488 00:24:15.793 }, 00:24:15.793 { 00:24:15.793 "name": "BaseBdev2", 00:24:15.793 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:15.793 "is_configured": true, 00:24:15.793 "data_offset": 2048, 00:24:15.793 "data_size": 63488 00:24:15.793 }, 00:24:15.793 { 00:24:15.793 "name": "BaseBdev3", 00:24:15.793 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:15.793 "is_configured": true, 00:24:15.793 "data_offset": 2048, 00:24:15.793 "data_size": 63488 00:24:15.793 }, 00:24:15.793 { 00:24:15.793 "name": "BaseBdev4", 00:24:15.793 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:15.793 "is_configured": true, 00:24:15.793 "data_offset": 2048, 00:24:15.793 "data_size": 63488 00:24:15.793 } 00:24:15.793 ] 00:24:15.793 }' 
00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.793 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.377 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:16.377 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.377 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.377 [2024-12-09 23:05:31.940719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:16.377 [2024-12-09 23:05:31.940801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.377 [2024-12-09 23:05:31.940834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:16.377 [2024-12-09 23:05:31.940849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.377 [2024-12-09 23:05:31.941477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.377 [2024-12-09 23:05:31.941520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:16.377 [2024-12-09 23:05:31.941641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:16.377 [2024-12-09 23:05:31.941661] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:16.377 [2024-12-09 23:05:31.941673] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:16.377 [2024-12-09 23:05:31.941711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:16.377 [2024-12-09 23:05:31.961151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:24:16.377 spare 00:24:16.377 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.377 23:05:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:16.377 [2024-12-09 23:05:31.973398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:17.316 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.316 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:17.316 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:17.316 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:17.316 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:17.316 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.317 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.317 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.317 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.317 23:05:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:17.317 "name": "raid_bdev1", 00:24:17.317 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:17.317 "strip_size_kb": 64, 00:24:17.317 "state": 
"online", 00:24:17.317 "raid_level": "raid5f", 00:24:17.317 "superblock": true, 00:24:17.317 "num_base_bdevs": 4, 00:24:17.317 "num_base_bdevs_discovered": 4, 00:24:17.317 "num_base_bdevs_operational": 4, 00:24:17.317 "process": { 00:24:17.317 "type": "rebuild", 00:24:17.317 "target": "spare", 00:24:17.317 "progress": { 00:24:17.317 "blocks": 17280, 00:24:17.317 "percent": 9 00:24:17.317 } 00:24:17.317 }, 00:24:17.317 "base_bdevs_list": [ 00:24:17.317 { 00:24:17.317 "name": "spare", 00:24:17.317 "uuid": "1684b128-a786-5ed4-8324-11ab60ee180c", 00:24:17.317 "is_configured": true, 00:24:17.317 "data_offset": 2048, 00:24:17.317 "data_size": 63488 00:24:17.317 }, 00:24:17.317 { 00:24:17.317 "name": "BaseBdev2", 00:24:17.317 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:17.317 "is_configured": true, 00:24:17.317 "data_offset": 2048, 00:24:17.317 "data_size": 63488 00:24:17.317 }, 00:24:17.317 { 00:24:17.317 "name": "BaseBdev3", 00:24:17.317 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:17.317 "is_configured": true, 00:24:17.317 "data_offset": 2048, 00:24:17.317 "data_size": 63488 00:24:17.317 }, 00:24:17.317 { 00:24:17.317 "name": "BaseBdev4", 00:24:17.317 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:17.317 "is_configured": true, 00:24:17.317 "data_offset": 2048, 00:24:17.317 "data_size": 63488 00:24:17.317 } 00:24:17.317 ] 00:24:17.317 }' 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:17.317 23:05:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.317 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.317 [2024-12-09 23:05:33.105683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:17.576 [2024-12-09 23:05:33.184149] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:17.576 [2024-12-09 23:05:33.184244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.576 [2024-12-09 23:05:33.184272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:17.576 [2024-12-09 23:05:33.184282] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:17.576 23:05:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.576 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.577 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.577 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.577 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.577 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:17.577 "name": "raid_bdev1", 00:24:17.577 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:17.577 "strip_size_kb": 64, 00:24:17.577 "state": "online", 00:24:17.577 "raid_level": "raid5f", 00:24:17.577 "superblock": true, 00:24:17.577 "num_base_bdevs": 4, 00:24:17.577 "num_base_bdevs_discovered": 3, 00:24:17.577 "num_base_bdevs_operational": 3, 00:24:17.577 "base_bdevs_list": [ 00:24:17.577 { 00:24:17.577 "name": null, 00:24:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.577 "is_configured": false, 00:24:17.577 "data_offset": 0, 00:24:17.577 "data_size": 63488 00:24:17.577 }, 00:24:17.577 { 00:24:17.577 "name": "BaseBdev2", 00:24:17.577 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:17.577 "is_configured": true, 00:24:17.577 "data_offset": 2048, 00:24:17.577 "data_size": 63488 00:24:17.577 }, 00:24:17.577 { 00:24:17.577 "name": "BaseBdev3", 00:24:17.577 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:17.577 "is_configured": true, 00:24:17.577 "data_offset": 2048, 00:24:17.577 "data_size": 63488 00:24:17.577 }, 00:24:17.577 { 00:24:17.577 "name": "BaseBdev4", 00:24:17.577 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:17.577 "is_configured": true, 00:24:17.577 "data_offset": 2048, 00:24:17.577 
"data_size": 63488 00:24:17.577 } 00:24:17.577 ] 00:24:17.577 }' 00:24:17.577 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:17.577 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.836 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.836 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:17.836 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:17.836 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:17.836 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.095 "name": "raid_bdev1", 00:24:18.095 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:18.095 "strip_size_kb": 64, 00:24:18.095 "state": "online", 00:24:18.095 "raid_level": "raid5f", 00:24:18.095 "superblock": true, 00:24:18.095 "num_base_bdevs": 4, 00:24:18.095 "num_base_bdevs_discovered": 3, 00:24:18.095 "num_base_bdevs_operational": 3, 00:24:18.095 "base_bdevs_list": [ 00:24:18.095 { 00:24:18.095 "name": null, 00:24:18.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.095 
"is_configured": false, 00:24:18.095 "data_offset": 0, 00:24:18.095 "data_size": 63488 00:24:18.095 }, 00:24:18.095 { 00:24:18.095 "name": "BaseBdev2", 00:24:18.095 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:18.095 "is_configured": true, 00:24:18.095 "data_offset": 2048, 00:24:18.095 "data_size": 63488 00:24:18.095 }, 00:24:18.095 { 00:24:18.095 "name": "BaseBdev3", 00:24:18.095 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:18.095 "is_configured": true, 00:24:18.095 "data_offset": 2048, 00:24:18.095 "data_size": 63488 00:24:18.095 }, 00:24:18.095 { 00:24:18.095 "name": "BaseBdev4", 00:24:18.095 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:18.095 "is_configured": true, 00:24:18.095 "data_offset": 2048, 00:24:18.095 "data_size": 63488 00:24:18.095 } 00:24:18.095 ] 00:24:18.095 }' 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.095 23:05:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.095 [2024-12-09 23:05:33.828715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:18.095 [2024-12-09 23:05:33.828798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.095 [2024-12-09 23:05:33.828827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:18.095 [2024-12-09 23:05:33.828840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.095 [2024-12-09 23:05:33.829487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.095 [2024-12-09 23:05:33.829587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:18.095 [2024-12-09 23:05:33.829717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:18.095 [2024-12-09 23:05:33.829736] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:18.095 [2024-12-09 23:05:33.829753] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:18.095 [2024-12-09 23:05:33.829766] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:18.095 BaseBdev1 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.095 23:05:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.035 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.299 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.299 "name": "raid_bdev1", 00:24:19.299 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:19.299 "strip_size_kb": 64, 00:24:19.299 "state": "online", 00:24:19.299 "raid_level": "raid5f", 00:24:19.299 "superblock": true, 00:24:19.299 "num_base_bdevs": 4, 00:24:19.299 "num_base_bdevs_discovered": 3, 00:24:19.299 "num_base_bdevs_operational": 3, 00:24:19.299 "base_bdevs_list": [ 00:24:19.299 { 00:24:19.299 "name": null, 00:24:19.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.299 "is_configured": false, 00:24:19.299 
"data_offset": 0, 00:24:19.299 "data_size": 63488 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "name": "BaseBdev2", 00:24:19.299 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:19.299 "is_configured": true, 00:24:19.299 "data_offset": 2048, 00:24:19.299 "data_size": 63488 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "name": "BaseBdev3", 00:24:19.299 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:19.299 "is_configured": true, 00:24:19.299 "data_offset": 2048, 00:24:19.299 "data_size": 63488 00:24:19.299 }, 00:24:19.299 { 00:24:19.299 "name": "BaseBdev4", 00:24:19.299 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:19.299 "is_configured": true, 00:24:19.299 "data_offset": 2048, 00:24:19.299 "data_size": 63488 00:24:19.299 } 00:24:19.299 ] 00:24:19.299 }' 00:24:19.299 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.299 23:05:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.564 "name": "raid_bdev1", 00:24:19.564 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:19.564 "strip_size_kb": 64, 00:24:19.564 "state": "online", 00:24:19.564 "raid_level": "raid5f", 00:24:19.564 "superblock": true, 00:24:19.564 "num_base_bdevs": 4, 00:24:19.564 "num_base_bdevs_discovered": 3, 00:24:19.564 "num_base_bdevs_operational": 3, 00:24:19.564 "base_bdevs_list": [ 00:24:19.564 { 00:24:19.564 "name": null, 00:24:19.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.564 "is_configured": false, 00:24:19.564 "data_offset": 0, 00:24:19.564 "data_size": 63488 00:24:19.564 }, 00:24:19.564 { 00:24:19.564 "name": "BaseBdev2", 00:24:19.564 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:19.564 "is_configured": true, 00:24:19.564 "data_offset": 2048, 00:24:19.564 "data_size": 63488 00:24:19.564 }, 00:24:19.564 { 00:24:19.564 "name": "BaseBdev3", 00:24:19.564 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:19.564 "is_configured": true, 00:24:19.564 "data_offset": 2048, 00:24:19.564 "data_size": 63488 00:24:19.564 }, 00:24:19.564 { 00:24:19.564 "name": "BaseBdev4", 00:24:19.564 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:19.564 "is_configured": true, 00:24:19.564 "data_offset": 2048, 00:24:19.564 "data_size": 63488 00:24:19.564 } 00:24:19.564 ] 00:24:19.564 }' 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.564 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:19.823 
23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:19.823 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:24:19.823 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:19.823 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:19.823 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.823 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:19.823 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.824 [2024-12-09 23:05:35.428849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.824 [2024-12-09 23:05:35.429220] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:19.824 [2024-12-09 23:05:35.429343] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:19.824 request: 00:24:19.824 { 00:24:19.824 "base_bdev": "BaseBdev1", 00:24:19.824 "raid_bdev": "raid_bdev1", 00:24:19.824 "method": "bdev_raid_add_base_bdev", 00:24:19.824 "req_id": 1 00:24:19.824 } 00:24:19.824 Got JSON-RPC error response 00:24:19.824 response: 00:24:19.824 { 00:24:19.824 "code": -22, 00:24:19.824 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:24:19.824 } 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:19.824 23:05:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.762 "name": "raid_bdev1", 00:24:20.762 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:20.762 "strip_size_kb": 64, 00:24:20.762 "state": "online", 00:24:20.762 "raid_level": "raid5f", 00:24:20.762 "superblock": true, 00:24:20.762 "num_base_bdevs": 4, 00:24:20.762 "num_base_bdevs_discovered": 3, 00:24:20.762 "num_base_bdevs_operational": 3, 00:24:20.762 "base_bdevs_list": [ 00:24:20.762 { 00:24:20.762 "name": null, 00:24:20.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.762 "is_configured": false, 00:24:20.762 "data_offset": 0, 00:24:20.762 "data_size": 63488 00:24:20.762 }, 00:24:20.762 { 00:24:20.762 "name": "BaseBdev2", 00:24:20.762 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:20.762 "is_configured": true, 00:24:20.762 "data_offset": 2048, 00:24:20.762 "data_size": 63488 00:24:20.762 }, 00:24:20.762 { 00:24:20.762 "name": "BaseBdev3", 00:24:20.762 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:20.762 "is_configured": true, 00:24:20.762 "data_offset": 2048, 00:24:20.762 "data_size": 63488 00:24:20.762 }, 00:24:20.762 { 00:24:20.762 "name": "BaseBdev4", 00:24:20.762 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:20.762 "is_configured": true, 00:24:20.762 "data_offset": 2048, 00:24:20.762 "data_size": 63488 00:24:20.762 } 00:24:20.762 ] 00:24:20.762 }' 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.762 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:21.332 "name": "raid_bdev1", 00:24:21.332 "uuid": "5357fa5b-4ba4-48a2-b6fd-e50a0a4b3e75", 00:24:21.332 "strip_size_kb": 64, 00:24:21.332 "state": "online", 00:24:21.332 "raid_level": "raid5f", 00:24:21.332 "superblock": true, 00:24:21.332 "num_base_bdevs": 4, 00:24:21.332 "num_base_bdevs_discovered": 3, 00:24:21.332 "num_base_bdevs_operational": 3, 00:24:21.332 "base_bdevs_list": [ 00:24:21.332 { 00:24:21.332 "name": null, 00:24:21.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.332 "is_configured": false, 00:24:21.332 "data_offset": 0, 00:24:21.332 "data_size": 63488 00:24:21.332 }, 00:24:21.332 { 00:24:21.332 "name": "BaseBdev2", 00:24:21.332 "uuid": "4c713df6-067a-55ac-aa8c-12512ef94108", 00:24:21.332 "is_configured": true, 
00:24:21.332 "data_offset": 2048, 00:24:21.332 "data_size": 63488 00:24:21.332 }, 00:24:21.332 { 00:24:21.332 "name": "BaseBdev3", 00:24:21.332 "uuid": "6eaea18c-d38b-5f9c-9cd5-01e309e5b170", 00:24:21.332 "is_configured": true, 00:24:21.332 "data_offset": 2048, 00:24:21.332 "data_size": 63488 00:24:21.332 }, 00:24:21.332 { 00:24:21.332 "name": "BaseBdev4", 00:24:21.332 "uuid": "efb40431-9a16-5edf-884a-c45219ec1d2e", 00:24:21.332 "is_configured": true, 00:24:21.332 "data_offset": 2048, 00:24:21.332 "data_size": 63488 00:24:21.332 } 00:24:21.332 ] 00:24:21.332 }' 00:24:21.332 23:05:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85829 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85829 ']' 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85829 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85829 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 85829' 00:24:21.332 killing process with pid 85829 00:24:21.332 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85829 00:24:21.332 Received shutdown signal, test time was about 60.000000 seconds 00:24:21.332 00:24:21.332 Latency(us) 00:24:21.332 [2024-12-09T23:05:37.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.333 [2024-12-09T23:05:37.189Z] =================================================================================================================== 00:24:21.333 [2024-12-09T23:05:37.189Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.333 23:05:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85829 00:24:21.333 [2024-12-09 23:05:37.082548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:21.333 [2024-12-09 23:05:37.082707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:21.333 [2024-12-09 23:05:37.082818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:21.333 [2024-12-09 23:05:37.082835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:21.901 [2024-12-09 23:05:37.663418] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:23.281 23:05:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:23.281 00:24:23.282 real 0m27.630s 00:24:23.282 user 0m34.714s 00:24:23.282 sys 0m3.078s 00:24:23.282 23:05:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.282 23:05:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.282 ************************************ 00:24:23.282 END TEST raid5f_rebuild_test_sb 00:24:23.282 ************************************ 00:24:23.282 23:05:38 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:24:23.282 23:05:38 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:23.282 23:05:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:23.282 23:05:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.282 23:05:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:23.282 ************************************ 00:24:23.282 START TEST raid_state_function_test_sb_4k 00:24:23.282 ************************************ 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:23.282 23:05:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86647 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86647' 00:24:23.282 Process raid pid: 86647 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86647 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86647 ']' 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:23.282 23:05:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.282 23:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:23.282 [2024-12-09 23:05:39.020166] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:24:23.282 [2024-12-09 23:05:39.020354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.548 [2024-12-09 23:05:39.196025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.548 [2024-12-09 23:05:39.317513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.816 [2024-12-09 23:05:39.533565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:23.816 [2024-12-09 23:05:39.533683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.085 [2024-12-09 23:05:39.873781] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:24.085 [2024-12-09 23:05:39.873901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:24.085 [2024-12-09 23:05:39.873947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:24.085 [2024-12-09 23:05:39.873987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.085 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.086 
23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.086 "name": "Existed_Raid", 00:24:24.086 "uuid": "17df7730-f004-4d45-b039-e6cd936e3663", 00:24:24.086 "strip_size_kb": 0, 00:24:24.086 "state": "configuring", 00:24:24.086 "raid_level": "raid1", 00:24:24.086 "superblock": true, 00:24:24.086 "num_base_bdevs": 2, 00:24:24.086 "num_base_bdevs_discovered": 0, 00:24:24.086 "num_base_bdevs_operational": 2, 00:24:24.086 "base_bdevs_list": [ 00:24:24.086 { 00:24:24.086 "name": "BaseBdev1", 00:24:24.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.086 "is_configured": false, 00:24:24.086 "data_offset": 0, 00:24:24.086 "data_size": 0 00:24:24.086 }, 00:24:24.086 { 00:24:24.086 "name": "BaseBdev2", 00:24:24.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.086 "is_configured": false, 00:24:24.086 "data_offset": 0, 00:24:24.086 "data_size": 0 00:24:24.086 } 00:24:24.086 ] 00:24:24.086 }' 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.086 23:05:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.678 [2024-12-09 23:05:40.304999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:24.678 [2024-12-09 23:05:40.305033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.678 [2024-12-09 23:05:40.316983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:24.678 [2024-12-09 23:05:40.317075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:24.678 [2024-12-09 23:05:40.317089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:24.678 [2024-12-09 23:05:40.317101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:24:24.678 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.678 23:05:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.678 [2024-12-09 23:05:40.366167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:24.679 BaseBdev1 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.679 [ 00:24:24.679 { 00:24:24.679 "name": "BaseBdev1", 00:24:24.679 "aliases": [ 00:24:24.679 
"8d4f40ef-316d-4930-b74c-199c55d01f93" 00:24:24.679 ], 00:24:24.679 "product_name": "Malloc disk", 00:24:24.679 "block_size": 4096, 00:24:24.679 "num_blocks": 8192, 00:24:24.679 "uuid": "8d4f40ef-316d-4930-b74c-199c55d01f93", 00:24:24.679 "assigned_rate_limits": { 00:24:24.679 "rw_ios_per_sec": 0, 00:24:24.679 "rw_mbytes_per_sec": 0, 00:24:24.679 "r_mbytes_per_sec": 0, 00:24:24.679 "w_mbytes_per_sec": 0 00:24:24.679 }, 00:24:24.679 "claimed": true, 00:24:24.679 "claim_type": "exclusive_write", 00:24:24.679 "zoned": false, 00:24:24.679 "supported_io_types": { 00:24:24.679 "read": true, 00:24:24.679 "write": true, 00:24:24.679 "unmap": true, 00:24:24.679 "flush": true, 00:24:24.679 "reset": true, 00:24:24.679 "nvme_admin": false, 00:24:24.679 "nvme_io": false, 00:24:24.679 "nvme_io_md": false, 00:24:24.679 "write_zeroes": true, 00:24:24.679 "zcopy": true, 00:24:24.679 "get_zone_info": false, 00:24:24.679 "zone_management": false, 00:24:24.679 "zone_append": false, 00:24:24.679 "compare": false, 00:24:24.679 "compare_and_write": false, 00:24:24.679 "abort": true, 00:24:24.679 "seek_hole": false, 00:24:24.679 "seek_data": false, 00:24:24.679 "copy": true, 00:24:24.679 "nvme_iov_md": false 00:24:24.679 }, 00:24:24.679 "memory_domains": [ 00:24:24.679 { 00:24:24.679 "dma_device_id": "system", 00:24:24.679 "dma_device_type": 1 00:24:24.679 }, 00:24:24.679 { 00:24:24.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.679 "dma_device_type": 2 00:24:24.679 } 00:24:24.679 ], 00:24:24.679 "driver_specific": {} 00:24:24.679 } 00:24:24.679 ] 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.679 "name": "Existed_Raid", 00:24:24.679 "uuid": "f283654c-95ec-4303-b36c-4d9f5966331d", 00:24:24.679 "strip_size_kb": 0, 00:24:24.679 "state": "configuring", 00:24:24.679 "raid_level": "raid1", 00:24:24.679 "superblock": true, 00:24:24.679 "num_base_bdevs": 2, 00:24:24.679 
"num_base_bdevs_discovered": 1, 00:24:24.679 "num_base_bdevs_operational": 2, 00:24:24.679 "base_bdevs_list": [ 00:24:24.679 { 00:24:24.679 "name": "BaseBdev1", 00:24:24.679 "uuid": "8d4f40ef-316d-4930-b74c-199c55d01f93", 00:24:24.679 "is_configured": true, 00:24:24.679 "data_offset": 256, 00:24:24.679 "data_size": 7936 00:24:24.679 }, 00:24:24.679 { 00:24:24.679 "name": "BaseBdev2", 00:24:24.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.679 "is_configured": false, 00:24:24.679 "data_offset": 0, 00:24:24.679 "data_size": 0 00:24:24.679 } 00:24:24.679 ] 00:24:24.679 }' 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.679 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.954 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:24.954 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.954 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.218 [2024-12-09 23:05:40.813536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:25.218 [2024-12-09 23:05:40.813654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.218 [2024-12-09 23:05:40.825566] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.218 [2024-12-09 23:05:40.827501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.218 [2024-12-09 23:05:40.827539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.218 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.219 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.219 "name": "Existed_Raid", 00:24:25.219 "uuid": "b60c45b2-b61c-4544-864f-02be895ee9ec", 00:24:25.219 "strip_size_kb": 0, 00:24:25.219 "state": "configuring", 00:24:25.219 "raid_level": "raid1", 00:24:25.219 "superblock": true, 00:24:25.219 "num_base_bdevs": 2, 00:24:25.219 "num_base_bdevs_discovered": 1, 00:24:25.219 "num_base_bdevs_operational": 2, 00:24:25.219 "base_bdevs_list": [ 00:24:25.219 { 00:24:25.219 "name": "BaseBdev1", 00:24:25.219 "uuid": "8d4f40ef-316d-4930-b74c-199c55d01f93", 00:24:25.219 "is_configured": true, 00:24:25.219 "data_offset": 256, 00:24:25.219 "data_size": 7936 00:24:25.219 }, 00:24:25.219 { 00:24:25.219 "name": "BaseBdev2", 00:24:25.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.219 "is_configured": false, 00:24:25.219 "data_offset": 0, 00:24:25.219 "data_size": 0 00:24:25.219 } 00:24:25.219 ] 00:24:25.219 }' 00:24:25.219 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.219 23:05:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.480 23:05:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.480 [2024-12-09 23:05:41.274827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:25.480 [2024-12-09 23:05:41.275230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:25.480 [2024-12-09 23:05:41.275284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:25.480 [2024-12-09 23:05:41.275604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:25.480 [2024-12-09 23:05:41.275839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:25.480 [2024-12-09 23:05:41.275892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:25.480 BaseBdev2 00:24:25.480 [2024-12-09 23:05:41.276075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:25.480 23:05:41
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.480 [ 00:24:25.480 { 00:24:25.480 "name": "BaseBdev2", 00:24:25.480 "aliases": [ 00:24:25.480 "75461a5b-22c9-453a-baaa-885689584133" 00:24:25.480 ], 00:24:25.480 "product_name": "Malloc disk", 00:24:25.480 "block_size": 4096, 00:24:25.480 "num_blocks": 8192, 00:24:25.480 "uuid": "75461a5b-22c9-453a-baaa-885689584133", 00:24:25.480 "assigned_rate_limits": { 00:24:25.480 "rw_ios_per_sec": 0, 00:24:25.480 "rw_mbytes_per_sec": 0, 00:24:25.480 "r_mbytes_per_sec": 0, 00:24:25.480 "w_mbytes_per_sec": 0 00:24:25.480 }, 00:24:25.480 "claimed": true, 00:24:25.480 "claim_type": "exclusive_write", 00:24:25.480 "zoned": false, 00:24:25.480 "supported_io_types": { 00:24:25.480 "read": true, 00:24:25.480 "write": true, 00:24:25.480 "unmap": true, 00:24:25.480 "flush": true, 00:24:25.480 "reset": true, 00:24:25.480 "nvme_admin": false, 00:24:25.480 "nvme_io": false, 00:24:25.480 "nvme_io_md": false, 00:24:25.480 "write_zeroes": true, 00:24:25.480 "zcopy": true, 00:24:25.480 "get_zone_info": false, 00:24:25.480 "zone_management": false, 00:24:25.480 "zone_append": false, 00:24:25.480 "compare": false, 00:24:25.480 "compare_and_write": false, 00:24:25.480 "abort": true, 00:24:25.480 "seek_hole": false, 00:24:25.480 "seek_data": false, 00:24:25.480 "copy": true, 00:24:25.480 "nvme_iov_md": false 
00:24:25.480 }, 00:24:25.480 "memory_domains": [ 00:24:25.480 { 00:24:25.480 "dma_device_id": "system", 00:24:25.480 "dma_device_type": 1 00:24:25.480 }, 00:24:25.480 { 00:24:25.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.480 "dma_device_type": 2 00:24:25.480 } 00:24:25.480 ], 00:24:25.480 "driver_specific": {} 00:24:25.480 } 00:24:25.480 ] 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.480 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.481 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.744 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.744 "name": "Existed_Raid", 00:24:25.744 "uuid": "b60c45b2-b61c-4544-864f-02be895ee9ec", 00:24:25.744 "strip_size_kb": 0, 00:24:25.744 "state": "online", 00:24:25.744 "raid_level": "raid1", 00:24:25.744 "superblock": true, 00:24:25.744 "num_base_bdevs": 2, 00:24:25.744 "num_base_bdevs_discovered": 2, 00:24:25.744 "num_base_bdevs_operational": 2, 00:24:25.744 "base_bdevs_list": [ 00:24:25.744 { 00:24:25.744 "name": "BaseBdev1", 00:24:25.744 "uuid": "8d4f40ef-316d-4930-b74c-199c55d01f93", 00:24:25.744 "is_configured": true, 00:24:25.744 "data_offset": 256, 00:24:25.744 "data_size": 7936 00:24:25.744 }, 00:24:25.744 { 00:24:25.744 "name": "BaseBdev2", 00:24:25.744 "uuid": "75461a5b-22c9-453a-baaa-885689584133", 00:24:25.744 "is_configured": true, 00:24:25.744 "data_offset": 256, 00:24:25.744 "data_size": 7936 00:24:25.744 } 00:24:25.744 ] 00:24:25.744 }' 00:24:25.744 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.744 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:26.007 23:05:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:26.007 [2024-12-09 23:05:41.746400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:26.007 "name": "Existed_Raid", 00:24:26.007 "aliases": [ 00:24:26.007 "b60c45b2-b61c-4544-864f-02be895ee9ec" 00:24:26.007 ], 00:24:26.007 "product_name": "Raid Volume", 00:24:26.007 "block_size": 4096, 00:24:26.007 "num_blocks": 7936, 00:24:26.007 "uuid": "b60c45b2-b61c-4544-864f-02be895ee9ec", 00:24:26.007 "assigned_rate_limits": { 00:24:26.007 "rw_ios_per_sec": 0, 00:24:26.007 "rw_mbytes_per_sec": 0, 00:24:26.007 "r_mbytes_per_sec": 0, 00:24:26.007 "w_mbytes_per_sec": 0 00:24:26.007 }, 00:24:26.007 "claimed": false, 00:24:26.007 "zoned": false, 00:24:26.007 "supported_io_types": { 00:24:26.007 "read": true, 
00:24:26.007 "write": true, 00:24:26.007 "unmap": false, 00:24:26.007 "flush": false, 00:24:26.007 "reset": true, 00:24:26.007 "nvme_admin": false, 00:24:26.007 "nvme_io": false, 00:24:26.007 "nvme_io_md": false, 00:24:26.007 "write_zeroes": true, 00:24:26.007 "zcopy": false, 00:24:26.007 "get_zone_info": false, 00:24:26.007 "zone_management": false, 00:24:26.007 "zone_append": false, 00:24:26.007 "compare": false, 00:24:26.007 "compare_and_write": false, 00:24:26.007 "abort": false, 00:24:26.007 "seek_hole": false, 00:24:26.007 "seek_data": false, 00:24:26.007 "copy": false, 00:24:26.007 "nvme_iov_md": false 00:24:26.007 }, 00:24:26.007 "memory_domains": [ 00:24:26.007 { 00:24:26.007 "dma_device_id": "system", 00:24:26.007 "dma_device_type": 1 00:24:26.007 }, 00:24:26.007 { 00:24:26.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.007 "dma_device_type": 2 00:24:26.007 }, 00:24:26.007 { 00:24:26.007 "dma_device_id": "system", 00:24:26.007 "dma_device_type": 1 00:24:26.007 }, 00:24:26.007 { 00:24:26.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.007 "dma_device_type": 2 00:24:26.007 } 00:24:26.007 ], 00:24:26.007 "driver_specific": { 00:24:26.007 "raid": { 00:24:26.007 "uuid": "b60c45b2-b61c-4544-864f-02be895ee9ec", 00:24:26.007 "strip_size_kb": 0, 00:24:26.007 "state": "online", 00:24:26.007 "raid_level": "raid1", 00:24:26.007 "superblock": true, 00:24:26.007 "num_base_bdevs": 2, 00:24:26.007 "num_base_bdevs_discovered": 2, 00:24:26.007 "num_base_bdevs_operational": 2, 00:24:26.007 "base_bdevs_list": [ 00:24:26.007 { 00:24:26.007 "name": "BaseBdev1", 00:24:26.007 "uuid": "8d4f40ef-316d-4930-b74c-199c55d01f93", 00:24:26.007 "is_configured": true, 00:24:26.007 "data_offset": 256, 00:24:26.007 "data_size": 7936 00:24:26.007 }, 00:24:26.007 { 00:24:26.007 "name": "BaseBdev2", 00:24:26.007 "uuid": "75461a5b-22c9-453a-baaa-885689584133", 00:24:26.007 "is_configured": true, 00:24:26.007 "data_offset": 256, 00:24:26.007 "data_size": 7936 00:24:26.007 } 
00:24:26.007 ] 00:24:26.007 } 00:24:26.007 } 00:24:26.007 }' 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:26.007 BaseBdev2' 00:24:26.007 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.267 23:05:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 [2024-12-09 23:05:41.965826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:26.267 23:05:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.267 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.558 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.558 "name": "Existed_Raid", 00:24:26.558 "uuid": "b60c45b2-b61c-4544-864f-02be895ee9ec", 00:24:26.558 "strip_size_kb": 0, 00:24:26.558 "state": "online", 00:24:26.558 "raid_level": "raid1", 00:24:26.558 "superblock": true, 00:24:26.558 
"num_base_bdevs": 2, 00:24:26.558 "num_base_bdevs_discovered": 1, 00:24:26.558 "num_base_bdevs_operational": 1, 00:24:26.558 "base_bdevs_list": [ 00:24:26.558 { 00:24:26.558 "name": null, 00:24:26.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.558 "is_configured": false, 00:24:26.558 "data_offset": 0, 00:24:26.558 "data_size": 7936 00:24:26.558 }, 00:24:26.558 { 00:24:26.558 "name": "BaseBdev2", 00:24:26.558 "uuid": "75461a5b-22c9-453a-baaa-885689584133", 00:24:26.558 "is_configured": true, 00:24:26.558 "data_offset": 256, 00:24:26.558 "data_size": 7936 00:24:26.558 } 00:24:26.558 ] 00:24:26.558 }' 00:24:26.558 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.558 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.893 [2024-12-09 23:05:42.570311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:26.893 [2024-12-09 23:05:42.570529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:26.893 [2024-12-09 23:05:42.680921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:26.893 [2024-12-09 23:05:42.680985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:26.893 [2024-12-09 23:05:42.680999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:26.893 23:05:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86647 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86647 ']' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86647 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.893 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86647 00:24:27.152 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.152 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.152 killing process with pid 86647 00:24:27.152 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86647' 00:24:27.152 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86647 00:24:27.152 [2024-12-09 23:05:42.777996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.152 23:05:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86647 00:24:27.152 [2024-12-09 23:05:42.796432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.529 23:05:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:24:28.529 00:24:28.529 real 0m5.157s 00:24:28.529 user 0m7.294s 00:24:28.529 sys 0m0.821s 00:24:28.529 
************************************ 00:24:28.529 END TEST raid_state_function_test_sb_4k 00:24:28.529 ************************************ 00:24:28.529 23:05:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.529 23:05:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.529 23:05:44 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:28.529 23:05:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:28.529 23:05:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.529 23:05:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.529 ************************************ 00:24:28.529 START TEST raid_superblock_test_4k 00:24:28.529 ************************************ 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:28.529 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86894 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86894 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86894 ']' 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.530 23:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:28.530 [2024-12-09 23:05:44.244725] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:24:28.530 [2024-12-09 23:05:44.244880] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86894 ] 00:24:28.790 [2024-12-09 23:05:44.410534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.790 [2024-12-09 23:05:44.553024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.054 [2024-12-09 23:05:44.823531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.054 [2024-12-09 23:05:44.823598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.320 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 malloc1 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 [2024-12-09 23:05:45.222544] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:29.590 [2024-12-09 23:05:45.222658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.590 [2024-12-09 23:05:45.222705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:29.590 [2024-12-09 23:05:45.222726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.590 [2024-12-09 23:05:45.226327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.590 [2024-12-09 23:05:45.226384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:29.590 pt1 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 malloc2 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 [2024-12-09 23:05:45.289064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:29.590 [2024-12-09 23:05:45.289152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.590 [2024-12-09 23:05:45.289185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:29.590 [2024-12-09 23:05:45.289198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.590 [2024-12-09 23:05:45.291942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.590 [2024-12-09 
23:05:45.291981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:29.590 pt2 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.590 [2024-12-09 23:05:45.301110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:29.590 [2024-12-09 23:05:45.303389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:29.590 [2024-12-09 23:05:45.303628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:29.590 [2024-12-09 23:05:45.303647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:29.590 [2024-12-09 23:05:45.303933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:29.590 [2024-12-09 23:05:45.304128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:29.590 [2024-12-09 23:05:45.304151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:29.590 [2024-12-09 23:05:45.304323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:29.590 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.591 "name": "raid_bdev1", 00:24:29.591 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:29.591 "strip_size_kb": 0, 00:24:29.591 "state": "online", 00:24:29.591 "raid_level": "raid1", 00:24:29.591 "superblock": true, 00:24:29.591 "num_base_bdevs": 2, 00:24:29.591 
"num_base_bdevs_discovered": 2, 00:24:29.591 "num_base_bdevs_operational": 2, 00:24:29.591 "base_bdevs_list": [ 00:24:29.591 { 00:24:29.591 "name": "pt1", 00:24:29.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:29.591 "is_configured": true, 00:24:29.591 "data_offset": 256, 00:24:29.591 "data_size": 7936 00:24:29.591 }, 00:24:29.591 { 00:24:29.591 "name": "pt2", 00:24:29.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:29.591 "is_configured": true, 00:24:29.591 "data_offset": 256, 00:24:29.591 "data_size": 7936 00:24:29.591 } 00:24:29.591 ] 00:24:29.591 }' 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.591 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.186 [2024-12-09 23:05:45.788789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:30.186 "name": "raid_bdev1", 00:24:30.186 "aliases": [ 00:24:30.186 "a5ae67ef-40a1-495c-a222-bbb16f611c84" 00:24:30.186 ], 00:24:30.186 "product_name": "Raid Volume", 00:24:30.186 "block_size": 4096, 00:24:30.186 "num_blocks": 7936, 00:24:30.186 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:30.186 "assigned_rate_limits": { 00:24:30.186 "rw_ios_per_sec": 0, 00:24:30.186 "rw_mbytes_per_sec": 0, 00:24:30.186 "r_mbytes_per_sec": 0, 00:24:30.186 "w_mbytes_per_sec": 0 00:24:30.186 }, 00:24:30.186 "claimed": false, 00:24:30.186 "zoned": false, 00:24:30.186 "supported_io_types": { 00:24:30.186 "read": true, 00:24:30.186 "write": true, 00:24:30.186 "unmap": false, 00:24:30.186 "flush": false, 00:24:30.186 "reset": true, 00:24:30.186 "nvme_admin": false, 00:24:30.186 "nvme_io": false, 00:24:30.186 "nvme_io_md": false, 00:24:30.186 "write_zeroes": true, 00:24:30.186 "zcopy": false, 00:24:30.186 "get_zone_info": false, 00:24:30.186 "zone_management": false, 00:24:30.186 "zone_append": false, 00:24:30.186 "compare": false, 00:24:30.186 "compare_and_write": false, 00:24:30.186 "abort": false, 00:24:30.186 "seek_hole": false, 00:24:30.186 "seek_data": false, 00:24:30.186 "copy": false, 00:24:30.186 "nvme_iov_md": false 00:24:30.186 }, 00:24:30.186 "memory_domains": [ 00:24:30.186 { 00:24:30.186 "dma_device_id": "system", 00:24:30.186 "dma_device_type": 1 00:24:30.186 }, 00:24:30.186 { 00:24:30.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.186 "dma_device_type": 2 00:24:30.186 }, 00:24:30.186 { 00:24:30.186 "dma_device_id": "system", 00:24:30.186 "dma_device_type": 1 00:24:30.186 }, 00:24:30.186 { 00:24:30.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.186 "dma_device_type": 2 00:24:30.186 } 00:24:30.186 ], 
00:24:30.186 "driver_specific": { 00:24:30.186 "raid": { 00:24:30.186 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:30.186 "strip_size_kb": 0, 00:24:30.186 "state": "online", 00:24:30.186 "raid_level": "raid1", 00:24:30.186 "superblock": true, 00:24:30.186 "num_base_bdevs": 2, 00:24:30.186 "num_base_bdevs_discovered": 2, 00:24:30.186 "num_base_bdevs_operational": 2, 00:24:30.186 "base_bdevs_list": [ 00:24:30.186 { 00:24:30.186 "name": "pt1", 00:24:30.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:30.186 "is_configured": true, 00:24:30.186 "data_offset": 256, 00:24:30.186 "data_size": 7936 00:24:30.186 }, 00:24:30.186 { 00:24:30.186 "name": "pt2", 00:24:30.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:30.186 "is_configured": true, 00:24:30.186 "data_offset": 256, 00:24:30.186 "data_size": 7936 00:24:30.186 } 00:24:30.186 ] 00:24:30.186 } 00:24:30.186 } 00:24:30.186 }' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:30.186 pt2' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.186 23:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:30.186 [2024-12-09 23:05:46.004358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:30.186 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:30.465 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a5ae67ef-40a1-495c-a222-bbb16f611c84 00:24:30.465 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z a5ae67ef-40a1-495c-a222-bbb16f611c84 ']' 00:24:30.465 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:30.465 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.465 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.465 [2024-12-09 23:05:46.047922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:30.465 [2024-12-09 23:05:46.047969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:30.465 [2024-12-09 23:05:46.048082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:30.465 [2024-12-09 23:05:46.048155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:30.466 [2024-12-09 23:05:46.048170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.466 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.466 [2024-12-09 23:05:46.171798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:30.466 [2024-12-09 23:05:46.174415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:30.466 [2024-12-09 23:05:46.174522] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:30.466 [2024-12-09 23:05:46.174587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:30.466 [2024-12-09 23:05:46.174604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:30.466 [2024-12-09 23:05:46.174615] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:30.466 request: 00:24:30.466 { 00:24:30.466 "name": "raid_bdev1", 00:24:30.467 "raid_level": "raid1", 00:24:30.467 "base_bdevs": [ 00:24:30.467 "malloc1", 00:24:30.467 "malloc2" 00:24:30.467 ], 00:24:30.467 "superblock": false, 00:24:30.467 "method": "bdev_raid_create", 00:24:30.467 "req_id": 1 00:24:30.467 } 00:24:30.467 Got JSON-RPC error response 00:24:30.467 response: 00:24:30.467 { 00:24:30.467 "code": -17, 00:24:30.467 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:30.467 } 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.467 [2024-12-09 23:05:46.239698] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:30.467 [2024-12-09 23:05:46.239812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.467 [2024-12-09 23:05:46.239839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:30.467 [2024-12-09 23:05:46.239855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.467 [2024-12-09 23:05:46.243187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.467 [2024-12-09 23:05:46.243244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:30.467 [2024-12-09 23:05:46.243375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:30.467 [2024-12-09 23:05:46.243478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:30.467 pt1 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.467 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:30.467 "name": "raid_bdev1", 00:24:30.467 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:30.467 "strip_size_kb": 0, 00:24:30.467 "state": "configuring", 00:24:30.467 "raid_level": "raid1", 00:24:30.467 "superblock": true, 00:24:30.467 "num_base_bdevs": 2, 00:24:30.467 "num_base_bdevs_discovered": 1, 00:24:30.467 "num_base_bdevs_operational": 2, 00:24:30.467 "base_bdevs_list": [ 00:24:30.467 { 00:24:30.467 "name": "pt1", 00:24:30.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:30.467 "is_configured": true, 00:24:30.467 "data_offset": 256, 00:24:30.467 "data_size": 7936 00:24:30.467 }, 00:24:30.467 { 00:24:30.467 "name": null, 00:24:30.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:30.467 "is_configured": false, 00:24:30.468 "data_offset": 256, 00:24:30.468 "data_size": 7936 00:24:30.468 } 
00:24:30.468 ] 00:24:30.468 }' 00:24:30.468 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:30.468 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.056 [2024-12-09 23:05:46.655152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:31.056 [2024-12-09 23:05:46.655259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.056 [2024-12-09 23:05:46.655291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:31.056 [2024-12-09 23:05:46.655304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.056 [2024-12-09 23:05:46.655905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.056 [2024-12-09 23:05:46.655932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:31.056 [2024-12-09 23:05:46.656042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:31.056 [2024-12-09 23:05:46.656076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:31.056 [2024-12-09 23:05:46.656234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:24:31.056 [2024-12-09 23:05:46.656248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:31.056 [2024-12-09 23:05:46.656574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:31.056 [2024-12-09 23:05:46.656784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:31.056 [2024-12-09 23:05:46.656795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:31.056 [2024-12-09 23:05:46.656995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.056 pt2 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.056 23:05:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.057 "name": "raid_bdev1", 00:24:31.057 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:31.057 "strip_size_kb": 0, 00:24:31.057 "state": "online", 00:24:31.057 "raid_level": "raid1", 00:24:31.057 "superblock": true, 00:24:31.057 "num_base_bdevs": 2, 00:24:31.057 "num_base_bdevs_discovered": 2, 00:24:31.057 "num_base_bdevs_operational": 2, 00:24:31.057 "base_bdevs_list": [ 00:24:31.057 { 00:24:31.057 "name": "pt1", 00:24:31.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.057 "is_configured": true, 00:24:31.057 "data_offset": 256, 00:24:31.057 "data_size": 7936 00:24:31.057 }, 00:24:31.057 { 00:24:31.057 "name": "pt2", 00:24:31.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.057 "is_configured": true, 00:24:31.057 "data_offset": 256, 00:24:31.057 "data_size": 7936 00:24:31.057 } 00:24:31.057 ] 00:24:31.057 }' 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.057 23:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.316 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.316 [2024-12-09 23:05:47.158595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.576 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.576 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.576 "name": "raid_bdev1", 00:24:31.576 "aliases": [ 00:24:31.576 "a5ae67ef-40a1-495c-a222-bbb16f611c84" 00:24:31.576 ], 00:24:31.576 "product_name": "Raid Volume", 00:24:31.576 "block_size": 4096, 00:24:31.576 "num_blocks": 7936, 00:24:31.576 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:31.576 "assigned_rate_limits": { 00:24:31.577 "rw_ios_per_sec": 0, 00:24:31.577 "rw_mbytes_per_sec": 0, 00:24:31.577 "r_mbytes_per_sec": 0, 00:24:31.577 "w_mbytes_per_sec": 0 00:24:31.577 }, 00:24:31.577 "claimed": false, 00:24:31.577 "zoned": false, 00:24:31.577 "supported_io_types": { 00:24:31.577 "read": true, 00:24:31.577 "write": true, 00:24:31.577 "unmap": false, 
00:24:31.577 "flush": false, 00:24:31.577 "reset": true, 00:24:31.577 "nvme_admin": false, 00:24:31.577 "nvme_io": false, 00:24:31.577 "nvme_io_md": false, 00:24:31.577 "write_zeroes": true, 00:24:31.577 "zcopy": false, 00:24:31.577 "get_zone_info": false, 00:24:31.577 "zone_management": false, 00:24:31.577 "zone_append": false, 00:24:31.577 "compare": false, 00:24:31.577 "compare_and_write": false, 00:24:31.577 "abort": false, 00:24:31.577 "seek_hole": false, 00:24:31.577 "seek_data": false, 00:24:31.577 "copy": false, 00:24:31.577 "nvme_iov_md": false 00:24:31.577 }, 00:24:31.577 "memory_domains": [ 00:24:31.577 { 00:24:31.577 "dma_device_id": "system", 00:24:31.577 "dma_device_type": 1 00:24:31.577 }, 00:24:31.577 { 00:24:31.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.577 "dma_device_type": 2 00:24:31.577 }, 00:24:31.577 { 00:24:31.577 "dma_device_id": "system", 00:24:31.577 "dma_device_type": 1 00:24:31.577 }, 00:24:31.577 { 00:24:31.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.577 "dma_device_type": 2 00:24:31.577 } 00:24:31.577 ], 00:24:31.577 "driver_specific": { 00:24:31.577 "raid": { 00:24:31.577 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:31.577 "strip_size_kb": 0, 00:24:31.577 "state": "online", 00:24:31.577 "raid_level": "raid1", 00:24:31.577 "superblock": true, 00:24:31.577 "num_base_bdevs": 2, 00:24:31.577 "num_base_bdevs_discovered": 2, 00:24:31.577 "num_base_bdevs_operational": 2, 00:24:31.577 "base_bdevs_list": [ 00:24:31.577 { 00:24:31.577 "name": "pt1", 00:24:31.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.577 "is_configured": true, 00:24:31.577 "data_offset": 256, 00:24:31.577 "data_size": 7936 00:24:31.577 }, 00:24:31.577 { 00:24:31.577 "name": "pt2", 00:24:31.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.577 "is_configured": true, 00:24:31.577 "data_offset": 256, 00:24:31.577 "data_size": 7936 00:24:31.577 } 00:24:31.577 ] 00:24:31.577 } 00:24:31.577 } 00:24:31.577 }' 00:24:31.577 
23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:31.577 pt2' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:31.577 [2024-12-09 23:05:47.390216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.577 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' a5ae67ef-40a1-495c-a222-bbb16f611c84 '!=' a5ae67ef-40a1-495c-a222-bbb16f611c84 ']' 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.837 [2024-12-09 23:05:47.437913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.837 "name": "raid_bdev1", 00:24:31.837 "uuid": 
"a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:31.837 "strip_size_kb": 0, 00:24:31.837 "state": "online", 00:24:31.837 "raid_level": "raid1", 00:24:31.837 "superblock": true, 00:24:31.837 "num_base_bdevs": 2, 00:24:31.837 "num_base_bdevs_discovered": 1, 00:24:31.837 "num_base_bdevs_operational": 1, 00:24:31.837 "base_bdevs_list": [ 00:24:31.837 { 00:24:31.837 "name": null, 00:24:31.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.837 "is_configured": false, 00:24:31.837 "data_offset": 0, 00:24:31.837 "data_size": 7936 00:24:31.837 }, 00:24:31.837 { 00:24:31.837 "name": "pt2", 00:24:31.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.837 "is_configured": true, 00:24:31.837 "data_offset": 256, 00:24:31.837 "data_size": 7936 00:24:31.837 } 00:24:31.837 ] 00:24:31.837 }' 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.837 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.098 [2024-12-09 23:05:47.881129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.098 [2024-12-09 23:05:47.881164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.098 [2024-12-09 23:05:47.881252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.098 [2024-12-09 23:05:47.881307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.098 [2024-12-09 23:05:47.881320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.098 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.358 [2024-12-09 23:05:47.956997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:32.358 [2024-12-09 23:05:47.957068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.358 [2024-12-09 23:05:47.957087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:32.358 [2024-12-09 23:05:47.957098] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.358 [2024-12-09 23:05:47.959443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.358 [2024-12-09 23:05:47.959495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:32.358 [2024-12-09 23:05:47.959595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:32.358 [2024-12-09 23:05:47.959652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.358 [2024-12-09 23:05:47.959772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:32.358 [2024-12-09 23:05:47.959791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:32.358 [2024-12-09 23:05:47.960054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:32.358 [2024-12-09 23:05:47.960231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:32.358 [2024-12-09 23:05:47.960246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:24:32.358 [2024-12-09 23:05:47.960410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.358 pt2 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.358 23:05:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.358 23:05:48 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.358 "name": "raid_bdev1", 00:24:32.358 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:32.358 "strip_size_kb": 0, 00:24:32.358 "state": "online", 00:24:32.358 "raid_level": "raid1", 00:24:32.358 "superblock": true, 00:24:32.358 "num_base_bdevs": 2, 00:24:32.358 "num_base_bdevs_discovered": 1, 00:24:32.358 "num_base_bdevs_operational": 1, 00:24:32.358 "base_bdevs_list": [ 00:24:32.358 { 00:24:32.358 "name": null, 00:24:32.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.358 "is_configured": false, 00:24:32.358 "data_offset": 256, 00:24:32.358 "data_size": 7936 00:24:32.358 }, 00:24:32.358 { 00:24:32.358 "name": "pt2", 00:24:32.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.358 "is_configured": true, 00:24:32.358 "data_offset": 256, 00:24:32.358 "data_size": 7936 00:24:32.358 } 00:24:32.358 ] 00:24:32.358 }' 00:24:32.358 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.358 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.618 [2024-12-09 23:05:48.416191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.618 [2024-12-09 23:05:48.416228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.618 [2024-12-09 23:05:48.416308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.618 [2024-12-09 23:05:48.416358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:24:32.618 [2024-12-09 23:05:48.416370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.618 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.939 [2024-12-09 23:05:48.476119] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:32.939 [2024-12-09 23:05:48.476187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.939 [2024-12-09 23:05:48.476232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:32.939 [2024-12-09 23:05:48.476245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.939 [2024-12-09 23:05:48.478658] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.939 [2024-12-09 23:05:48.478695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:32.939 [2024-12-09 23:05:48.478791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:32.939 [2024-12-09 23:05:48.478849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:32.939 [2024-12-09 23:05:48.479024] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:32.939 [2024-12-09 23:05:48.479043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.939 [2024-12-09 23:05:48.479063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:32.939 [2024-12-09 23:05:48.479129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.939 [2024-12-09 23:05:48.479219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:32.939 [2024-12-09 23:05:48.479234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:32.939 [2024-12-09 23:05:48.479527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:32.939 [2024-12-09 23:05:48.479715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:32.939 [2024-12-09 23:05:48.479737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:32.939 [2024-12-09 23:05:48.479912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.939 pt1 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.939 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.940 "name": "raid_bdev1", 00:24:32.940 "uuid": "a5ae67ef-40a1-495c-a222-bbb16f611c84", 00:24:32.940 "strip_size_kb": 0, 00:24:32.940 "state": "online", 00:24:32.940 
"raid_level": "raid1", 00:24:32.940 "superblock": true, 00:24:32.940 "num_base_bdevs": 2, 00:24:32.940 "num_base_bdevs_discovered": 1, 00:24:32.940 "num_base_bdevs_operational": 1, 00:24:32.940 "base_bdevs_list": [ 00:24:32.940 { 00:24:32.940 "name": null, 00:24:32.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.940 "is_configured": false, 00:24:32.940 "data_offset": 256, 00:24:32.940 "data_size": 7936 00:24:32.940 }, 00:24:32.940 { 00:24:32.940 "name": "pt2", 00:24:32.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.940 "is_configured": true, 00:24:32.940 "data_offset": 256, 00:24:32.940 "data_size": 7936 00:24:32.940 } 00:24:32.940 ] 00:24:32.940 }' 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.940 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.221 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:33.221 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.221 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:24:33.222 [2024-12-09 23:05:48.979552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.222 23:05:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' a5ae67ef-40a1-495c-a222-bbb16f611c84 '!=' a5ae67ef-40a1-495c-a222-bbb16f611c84 ']' 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86894 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86894 ']' 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86894 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86894 00:24:33.222 killing process with pid 86894 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86894' 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86894 00:24:33.222 [2024-12-09 23:05:49.049959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.222 [2024-12-09 23:05:49.050057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.222 [2024-12-09 23:05:49.050107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.222 [2024-12-09 
23:05:49.050122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:33.222 23:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86894 00:24:33.481 [2024-12-09 23:05:49.268772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:34.861 ************************************ 00:24:34.861 END TEST raid_superblock_test_4k 00:24:34.861 ************************************ 00:24:34.861 23:05:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:24:34.861 00:24:34.861 real 0m6.414s 00:24:34.861 user 0m9.563s 00:24:34.861 sys 0m1.159s 00:24:34.861 23:05:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.861 23:05:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.861 23:05:50 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:24:34.861 23:05:50 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:24:34.861 23:05:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:34.861 23:05:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.861 23:05:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:34.861 ************************************ 00:24:34.861 START TEST raid_rebuild_test_sb_4k 00:24:34.861 ************************************ 00:24:34.861 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:24:34.861 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:34.861 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:34.861 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:34.861 23:05:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:34.861 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:34.861 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87224 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87224 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87224 ']' 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.862 23:05:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:35.121 Zero copy mechanism will not be used. 00:24:35.121 [2024-12-09 23:05:50.717461] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:24:35.121 [2024-12-09 23:05:50.717647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87224 ] 00:24:35.121 [2024-12-09 23:05:50.890806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.384 [2024-12-09 23:05:51.013913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.384 [2024-12-09 23:05:51.218057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.384 [2024-12-09 23:05:51.218108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.957 BaseBdev1_malloc 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.957 [2024-12-09 23:05:51.635852] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:35.957 [2024-12-09 23:05:51.635916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.957 [2024-12-09 23:05:51.635938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:35.957 [2024-12-09 23:05:51.635949] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.957 [2024-12-09 23:05:51.638130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.957 [2024-12-09 23:05:51.638172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:35.957 BaseBdev1 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.957 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.957 BaseBdev2_malloc 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 [2024-12-09 23:05:51.691340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:35.958 [2024-12-09 23:05:51.691408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:24:35.958 [2024-12-09 23:05:51.691429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:35.958 [2024-12-09 23:05:51.691442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.958 [2024-12-09 23:05:51.693729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.958 [2024-12-09 23:05:51.693773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:35.958 BaseBdev2 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 spare_malloc 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 spare_delay 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 
[2024-12-09 23:05:51.773445] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:35.958 [2024-12-09 23:05:51.773524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.958 [2024-12-09 23:05:51.773548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:35.958 [2024-12-09 23:05:51.773559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.958 [2024-12-09 23:05:51.775943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.958 [2024-12-09 23:05:51.775989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:35.958 spare 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 [2024-12-09 23:05:51.785508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:35.958 [2024-12-09 23:05:51.787508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:35.958 [2024-12-09 23:05:51.787714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:35.958 [2024-12-09 23:05:51.787739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:35.958 [2024-12-09 23:05:51.788027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:35.958 [2024-12-09 23:05:51.788228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:35.958 [2024-12-09 
23:05:51.788246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:35.958 [2024-12-09 23:05:51.788424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.958 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.219 23:05:51 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.219 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:36.219 "name": "raid_bdev1", 00:24:36.219 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:36.219 "strip_size_kb": 0, 00:24:36.219 "state": "online", 00:24:36.219 "raid_level": "raid1", 00:24:36.219 "superblock": true, 00:24:36.219 "num_base_bdevs": 2, 00:24:36.219 "num_base_bdevs_discovered": 2, 00:24:36.219 "num_base_bdevs_operational": 2, 00:24:36.219 "base_bdevs_list": [ 00:24:36.219 { 00:24:36.219 "name": "BaseBdev1", 00:24:36.219 "uuid": "8922e197-bc12-59ce-a9e2-9ba0d3009ea8", 00:24:36.219 "is_configured": true, 00:24:36.219 "data_offset": 256, 00:24:36.219 "data_size": 7936 00:24:36.219 }, 00:24:36.219 { 00:24:36.219 "name": "BaseBdev2", 00:24:36.219 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:36.219 "is_configured": true, 00:24:36.219 "data_offset": 256, 00:24:36.219 "data_size": 7936 00:24:36.219 } 00:24:36.219 ] 00:24:36.219 }' 00:24:36.219 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:36.219 23:05:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.485 [2024-12-09 23:05:52.280981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.485 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:36.758 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:36.758 
23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:36.758 [2024-12-09 23:05:52.576183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:36.758 /dev/nbd0 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.017 1+0 records in 00:24:37.017 1+0 records out 00:24:37.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025184 s, 16.3 MB/s 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:24:37.017 23:05:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:24:37.017 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:37.018 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:37.018 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:37.018 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:37.018 23:05:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:37.586 7936+0 records in 00:24:37.586 7936+0 records out 00:24:37.586 32505856 bytes (33 MB, 31 MiB) copied, 0.655627 s, 49.6 MB/s 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:37.586 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:37.846 
[2024-12-09 23:05:53.522667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.846 [2024-12-09 23:05:53.544026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:37.846 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:37.846 23:05:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:37.847 "name": "raid_bdev1", 00:24:37.847 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:37.847 "strip_size_kb": 0, 00:24:37.847 "state": "online", 00:24:37.847 "raid_level": "raid1", 00:24:37.847 "superblock": true, 00:24:37.847 "num_base_bdevs": 2, 00:24:37.847 "num_base_bdevs_discovered": 1, 00:24:37.847 "num_base_bdevs_operational": 1, 00:24:37.847 "base_bdevs_list": [ 00:24:37.847 { 00:24:37.847 "name": null, 00:24:37.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.847 "is_configured": false, 00:24:37.847 "data_offset": 0, 00:24:37.847 "data_size": 7936 00:24:37.847 }, 00:24:37.847 { 00:24:37.847 "name": "BaseBdev2", 00:24:37.847 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:37.847 "is_configured": true, 00:24:37.847 "data_offset": 256, 00:24:37.847 
"data_size": 7936 00:24:37.847 } 00:24:37.847 ] 00:24:37.847 }' 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:37.847 23:05:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.417 23:05:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:38.417 23:05:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.417 23:05:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.417 [2024-12-09 23:05:54.015258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.417 [2024-12-09 23:05:54.033799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:38.417 23:05:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.417 23:05:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:38.417 [2024-12-09 23:05:54.035850] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:39.361 "name": "raid_bdev1", 00:24:39.361 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:39.361 "strip_size_kb": 0, 00:24:39.361 "state": "online", 00:24:39.361 "raid_level": "raid1", 00:24:39.361 "superblock": true, 00:24:39.361 "num_base_bdevs": 2, 00:24:39.361 "num_base_bdevs_discovered": 2, 00:24:39.361 "num_base_bdevs_operational": 2, 00:24:39.361 "process": { 00:24:39.361 "type": "rebuild", 00:24:39.361 "target": "spare", 00:24:39.361 "progress": { 00:24:39.361 "blocks": 2560, 00:24:39.361 "percent": 32 00:24:39.361 } 00:24:39.361 }, 00:24:39.361 "base_bdevs_list": [ 00:24:39.361 { 00:24:39.361 "name": "spare", 00:24:39.361 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:39.361 "is_configured": true, 00:24:39.361 "data_offset": 256, 00:24:39.361 "data_size": 7936 00:24:39.361 }, 00:24:39.361 { 00:24:39.361 "name": "BaseBdev2", 00:24:39.361 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:39.361 "is_configured": true, 00:24:39.361 "data_offset": 256, 00:24:39.361 "data_size": 7936 00:24:39.361 } 00:24:39.361 ] 00:24:39.361 }' 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:39.361 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.362 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:39.362 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.362 
23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:39.362 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.362 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.362 [2024-12-09 23:05:55.183502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:39.628 [2024-12-09 23:05:55.241895] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:39.628 [2024-12-09 23:05:55.241997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.628 [2024-12-09 23:05:55.242014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:39.628 [2024-12-09 23:05:55.242025] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.628 23:05:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.628 "name": "raid_bdev1", 00:24:39.628 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:39.628 "strip_size_kb": 0, 00:24:39.628 "state": "online", 00:24:39.628 "raid_level": "raid1", 00:24:39.628 "superblock": true, 00:24:39.628 "num_base_bdevs": 2, 00:24:39.628 "num_base_bdevs_discovered": 1, 00:24:39.628 "num_base_bdevs_operational": 1, 00:24:39.628 "base_bdevs_list": [ 00:24:39.628 { 00:24:39.628 "name": null, 00:24:39.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.628 "is_configured": false, 00:24:39.628 "data_offset": 0, 00:24:39.628 "data_size": 7936 00:24:39.628 }, 00:24:39.628 { 00:24:39.628 "name": "BaseBdev2", 00:24:39.628 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:39.628 "is_configured": true, 00:24:39.628 "data_offset": 256, 00:24:39.628 "data_size": 7936 00:24:39.628 } 00:24:39.628 ] 00:24:39.628 }' 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.628 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.197 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.197 "name": "raid_bdev1", 00:24:40.197 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:40.197 "strip_size_kb": 0, 00:24:40.197 "state": "online", 00:24:40.197 "raid_level": "raid1", 00:24:40.197 "superblock": true, 00:24:40.197 "num_base_bdevs": 2, 00:24:40.197 "num_base_bdevs_discovered": 1, 00:24:40.197 "num_base_bdevs_operational": 1, 00:24:40.197 "base_bdevs_list": [ 00:24:40.197 { 00:24:40.197 "name": null, 00:24:40.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.197 "is_configured": false, 00:24:40.197 "data_offset": 0, 00:24:40.197 "data_size": 7936 00:24:40.197 }, 00:24:40.197 { 00:24:40.197 "name": "BaseBdev2", 00:24:40.197 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:40.197 "is_configured": true, 00:24:40.197 "data_offset": 256, 00:24:40.198 "data_size": 7936 
00:24:40.198 } 00:24:40.198 ] 00:24:40.198 }' 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.198 [2024-12-09 23:05:55.927317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.198 [2024-12-09 23:05:55.946719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.198 23:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:40.198 [2024-12-09 23:05:55.948902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.295 23:05:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.295 "name": "raid_bdev1", 00:24:41.295 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:41.295 "strip_size_kb": 0, 00:24:41.295 "state": "online", 00:24:41.295 "raid_level": "raid1", 00:24:41.295 "superblock": true, 00:24:41.295 "num_base_bdevs": 2, 00:24:41.295 "num_base_bdevs_discovered": 2, 00:24:41.295 "num_base_bdevs_operational": 2, 00:24:41.295 "process": { 00:24:41.295 "type": "rebuild", 00:24:41.295 "target": "spare", 00:24:41.295 "progress": { 00:24:41.295 "blocks": 2560, 00:24:41.295 "percent": 32 00:24:41.295 } 00:24:41.295 }, 00:24:41.295 "base_bdevs_list": [ 00:24:41.295 { 00:24:41.295 "name": "spare", 00:24:41.295 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:41.295 "is_configured": true, 00:24:41.295 "data_offset": 256, 00:24:41.295 "data_size": 7936 00:24:41.295 }, 00:24:41.295 { 00:24:41.295 "name": "BaseBdev2", 00:24:41.295 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:41.295 "is_configured": true, 00:24:41.295 "data_offset": 256, 00:24:41.295 "data_size": 7936 00:24:41.295 } 00:24:41.295 ] 00:24:41.295 }' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:41.295 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=715 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.295 "name": "raid_bdev1", 00:24:41.295 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:41.295 "strip_size_kb": 0, 00:24:41.295 "state": "online", 00:24:41.295 "raid_level": "raid1", 00:24:41.295 "superblock": true, 00:24:41.295 "num_base_bdevs": 2, 00:24:41.295 "num_base_bdevs_discovered": 2, 00:24:41.295 "num_base_bdevs_operational": 2, 00:24:41.295 "process": { 00:24:41.295 "type": "rebuild", 00:24:41.295 "target": "spare", 00:24:41.295 "progress": { 00:24:41.295 "blocks": 2816, 00:24:41.295 "percent": 35 00:24:41.295 } 00:24:41.295 }, 00:24:41.295 "base_bdevs_list": [ 00:24:41.295 { 00:24:41.295 "name": "spare", 00:24:41.295 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:41.295 "is_configured": true, 00:24:41.295 "data_offset": 256, 00:24:41.295 "data_size": 7936 00:24:41.295 }, 00:24:41.295 { 00:24:41.295 "name": "BaseBdev2", 00:24:41.295 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:41.295 "is_configured": true, 00:24:41.295 "data_offset": 256, 00:24:41.295 "data_size": 7936 00:24:41.295 } 00:24:41.295 ] 00:24:41.295 }' 00:24:41.295 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.554 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.554 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.554 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.554 23:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.495 "name": "raid_bdev1", 00:24:42.495 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:42.495 "strip_size_kb": 0, 00:24:42.495 "state": "online", 00:24:42.495 "raid_level": "raid1", 00:24:42.495 "superblock": true, 00:24:42.495 "num_base_bdevs": 2, 00:24:42.495 "num_base_bdevs_discovered": 2, 00:24:42.495 "num_base_bdevs_operational": 2, 00:24:42.495 "process": { 00:24:42.495 "type": "rebuild", 00:24:42.495 "target": "spare", 00:24:42.495 "progress": { 00:24:42.495 "blocks": 5632, 00:24:42.495 "percent": 70 00:24:42.495 } 00:24:42.495 }, 00:24:42.495 "base_bdevs_list": [ 00:24:42.495 { 00:24:42.495 "name": "spare", 
00:24:42.495 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:42.495 "is_configured": true, 00:24:42.495 "data_offset": 256, 00:24:42.495 "data_size": 7936 00:24:42.495 }, 00:24:42.495 { 00:24:42.495 "name": "BaseBdev2", 00:24:42.495 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:42.495 "is_configured": true, 00:24:42.495 "data_offset": 256, 00:24:42.495 "data_size": 7936 00:24:42.495 } 00:24:42.495 ] 00:24:42.495 }' 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.495 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.755 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.755 23:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:43.325 [2024-12-09 23:05:59.064209] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:43.325 [2024-12-09 23:05:59.064307] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:43.325 [2024-12-09 23:05:59.064879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:43.585 23:05:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.585 "name": "raid_bdev1", 00:24:43.585 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:43.585 "strip_size_kb": 0, 00:24:43.585 "state": "online", 00:24:43.585 "raid_level": "raid1", 00:24:43.585 "superblock": true, 00:24:43.585 "num_base_bdevs": 2, 00:24:43.585 "num_base_bdevs_discovered": 2, 00:24:43.585 "num_base_bdevs_operational": 2, 00:24:43.585 "base_bdevs_list": [ 00:24:43.585 { 00:24:43.585 "name": "spare", 00:24:43.585 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:43.585 "is_configured": true, 00:24:43.585 "data_offset": 256, 00:24:43.585 "data_size": 7936 00:24:43.585 }, 00:24:43.585 { 00:24:43.585 "name": "BaseBdev2", 00:24:43.585 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:43.585 "is_configured": true, 00:24:43.585 "data_offset": 256, 00:24:43.585 "data_size": 7936 00:24:43.585 } 00:24:43.585 ] 00:24:43.585 }' 00:24:43.585 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.844 23:05:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.844 "name": "raid_bdev1", 00:24:43.844 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:43.844 "strip_size_kb": 0, 00:24:43.844 "state": "online", 00:24:43.844 "raid_level": "raid1", 00:24:43.844 "superblock": true, 00:24:43.844 "num_base_bdevs": 2, 00:24:43.844 "num_base_bdevs_discovered": 2, 00:24:43.844 "num_base_bdevs_operational": 2, 00:24:43.844 "base_bdevs_list": [ 00:24:43.844 { 00:24:43.844 "name": "spare", 00:24:43.844 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:43.844 "is_configured": true, 00:24:43.844 "data_offset": 256, 00:24:43.844 
"data_size": 7936 00:24:43.844 }, 00:24:43.844 { 00:24:43.844 "name": "BaseBdev2", 00:24:43.844 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:43.844 "is_configured": true, 00:24:43.844 "data_offset": 256, 00:24:43.844 "data_size": 7936 00:24:43.844 } 00:24:43.844 ] 00:24:43.844 }' 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:43.844 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.845 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.104 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.104 "name": "raid_bdev1", 00:24:44.104 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:44.104 "strip_size_kb": 0, 00:24:44.104 "state": "online", 00:24:44.104 "raid_level": "raid1", 00:24:44.104 "superblock": true, 00:24:44.104 "num_base_bdevs": 2, 00:24:44.104 "num_base_bdevs_discovered": 2, 00:24:44.104 "num_base_bdevs_operational": 2, 00:24:44.104 "base_bdevs_list": [ 00:24:44.104 { 00:24:44.104 "name": "spare", 00:24:44.104 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:44.104 "is_configured": true, 00:24:44.104 "data_offset": 256, 00:24:44.104 "data_size": 7936 00:24:44.104 }, 00:24:44.104 { 00:24:44.104 "name": "BaseBdev2", 00:24:44.104 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:44.104 "is_configured": true, 00:24:44.104 "data_offset": 256, 00:24:44.104 "data_size": 7936 00:24:44.104 } 00:24:44.104 ] 00:24:44.104 }' 00:24:44.104 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.104 23:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.364 [2024-12-09 23:06:00.152700] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.364 [2024-12-09 23:06:00.152738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:44.364 [2024-12-09 23:06:00.152822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:44.364 [2024-12-09 23:06:00.152896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:44.364 [2024-12-09 23:06:00.152909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:44.364 
23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:44.364 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:44.623 /dev/nbd0 00:24:44.623 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:44.624 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:44.884 1+0 records in 00:24:44.884 1+0 records out 00:24:44.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350477 s, 11.7 MB/s 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:44.884 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:44.884 /dev/nbd1 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:45.144 1+0 records in 00:24:45.144 1+0 records out 00:24:45.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003082 s, 13.3 MB/s 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:45.144 23:06:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.144 23:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:45.402 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.707 
23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.707 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.707 [2024-12-09 23:06:01.560756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:45.967 [2024-12-09 23:06:01.561258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.967 [2024-12-09 23:06:01.561310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:45.967 [2024-12-09 23:06:01.561323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.967 [2024-12-09 23:06:01.563955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.967 [2024-12-09 23:06:01.564073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:24:45.967 [2024-12-09 23:06:01.564240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:45.967 [2024-12-09 23:06:01.564303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:45.967 [2024-12-09 23:06:01.564489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.967 spare 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.967 [2024-12-09 23:06:01.664445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:45.967 [2024-12-09 23:06:01.664523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:45.967 [2024-12-09 23:06:01.664907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:45.967 [2024-12-09 23:06:01.665154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:45.967 [2024-12-09 23:06:01.665178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:45.967 [2024-12-09 23:06:01.665410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.967 
23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.967 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.968 "name": "raid_bdev1", 00:24:45.968 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:45.968 "strip_size_kb": 0, 00:24:45.968 "state": "online", 00:24:45.968 "raid_level": "raid1", 00:24:45.968 "superblock": true, 00:24:45.968 "num_base_bdevs": 2, 00:24:45.968 "num_base_bdevs_discovered": 2, 00:24:45.968 "num_base_bdevs_operational": 2, 00:24:45.968 "base_bdevs_list": [ 00:24:45.968 { 00:24:45.968 "name": "spare", 00:24:45.968 "uuid": 
"cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:45.968 "is_configured": true, 00:24:45.968 "data_offset": 256, 00:24:45.968 "data_size": 7936 00:24:45.968 }, 00:24:45.968 { 00:24:45.968 "name": "BaseBdev2", 00:24:45.968 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:45.968 "is_configured": true, 00:24:45.968 "data_offset": 256, 00:24:45.968 "data_size": 7936 00:24:45.968 } 00:24:45.968 ] 00:24:45.968 }' 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.968 23:06:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:46.536 "name": "raid_bdev1", 00:24:46.536 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:46.536 "strip_size_kb": 0, 00:24:46.536 
"state": "online", 00:24:46.536 "raid_level": "raid1", 00:24:46.536 "superblock": true, 00:24:46.536 "num_base_bdevs": 2, 00:24:46.536 "num_base_bdevs_discovered": 2, 00:24:46.536 "num_base_bdevs_operational": 2, 00:24:46.536 "base_bdevs_list": [ 00:24:46.536 { 00:24:46.536 "name": "spare", 00:24:46.536 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:46.536 "is_configured": true, 00:24:46.536 "data_offset": 256, 00:24:46.536 "data_size": 7936 00:24:46.536 }, 00:24:46.536 { 00:24:46.536 "name": "BaseBdev2", 00:24:46.536 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:46.536 "is_configured": true, 00:24:46.536 "data_offset": 256, 00:24:46.536 "data_size": 7936 00:24:46.536 } 00:24:46.536 ] 00:24:46.536 }' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:46.536 23:06:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.536 [2024-12-09 23:06:02.332754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:46.536 
23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.536 "name": "raid_bdev1", 00:24:46.536 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:46.536 "strip_size_kb": 0, 00:24:46.536 "state": "online", 00:24:46.536 "raid_level": "raid1", 00:24:46.536 "superblock": true, 00:24:46.536 "num_base_bdevs": 2, 00:24:46.536 "num_base_bdevs_discovered": 1, 00:24:46.536 "num_base_bdevs_operational": 1, 00:24:46.536 "base_bdevs_list": [ 00:24:46.536 { 00:24:46.536 "name": null, 00:24:46.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.536 "is_configured": false, 00:24:46.536 "data_offset": 0, 00:24:46.536 "data_size": 7936 00:24:46.536 }, 00:24:46.536 { 00:24:46.536 "name": "BaseBdev2", 00:24:46.536 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:46.536 "is_configured": true, 00:24:46.536 "data_offset": 256, 00:24:46.536 "data_size": 7936 00:24:46.536 } 00:24:46.536 ] 00:24:46.536 }' 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:46.536 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.101 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:47.101 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.101 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:47.101 [2024-12-09 23:06:02.828196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.101 [2024-12-09 23:06:02.828429] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:47.101 [2024-12-09 23:06:02.828448] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:24:47.101 [2024-12-09 23:06:02.828881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.101 [2024-12-09 23:06:02.848028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:47.101 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.101 23:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:47.101 [2024-12-09 23:06:02.850189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.039 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.298 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.298 "name": "raid_bdev1", 00:24:48.298 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:48.298 
"strip_size_kb": 0, 00:24:48.298 "state": "online", 00:24:48.299 "raid_level": "raid1", 00:24:48.299 "superblock": true, 00:24:48.299 "num_base_bdevs": 2, 00:24:48.299 "num_base_bdevs_discovered": 2, 00:24:48.299 "num_base_bdevs_operational": 2, 00:24:48.299 "process": { 00:24:48.299 "type": "rebuild", 00:24:48.299 "target": "spare", 00:24:48.299 "progress": { 00:24:48.299 "blocks": 2560, 00:24:48.299 "percent": 32 00:24:48.299 } 00:24:48.299 }, 00:24:48.299 "base_bdevs_list": [ 00:24:48.299 { 00:24:48.299 "name": "spare", 00:24:48.299 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:48.299 "is_configured": true, 00:24:48.299 "data_offset": 256, 00:24:48.299 "data_size": 7936 00:24:48.299 }, 00:24:48.299 { 00:24:48.299 "name": "BaseBdev2", 00:24:48.299 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:48.299 "is_configured": true, 00:24:48.299 "data_offset": 256, 00:24:48.299 "data_size": 7936 00:24:48.299 } 00:24:48.299 ] 00:24:48.299 }' 00:24:48.299 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.299 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:48.299 23:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.299 [2024-12-09 23:06:04.017348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:48.299 [2024-12-09 23:06:04.056265] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:24:48.299 [2024-12-09 23:06:04.056759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.299 [2024-12-09 23:06:04.056791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:48.299 [2024-12-09 23:06:04.056805] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.299 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.558 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.558 "name": "raid_bdev1", 00:24:48.558 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:48.558 "strip_size_kb": 0, 00:24:48.558 "state": "online", 00:24:48.558 "raid_level": "raid1", 00:24:48.558 "superblock": true, 00:24:48.558 "num_base_bdevs": 2, 00:24:48.558 "num_base_bdevs_discovered": 1, 00:24:48.558 "num_base_bdevs_operational": 1, 00:24:48.558 "base_bdevs_list": [ 00:24:48.558 { 00:24:48.558 "name": null, 00:24:48.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.558 "is_configured": false, 00:24:48.558 "data_offset": 0, 00:24:48.558 "data_size": 7936 00:24:48.558 }, 00:24:48.558 { 00:24:48.558 "name": "BaseBdev2", 00:24:48.558 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:48.558 "is_configured": true, 00:24:48.558 "data_offset": 256, 00:24:48.558 "data_size": 7936 00:24:48.558 } 00:24:48.558 ] 00:24:48.558 }' 00:24:48.558 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.558 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.818 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:48.818 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.818 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:48.818 [2024-12-09 23:06:04.570128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:48.818 [2024-12-09 23:06:04.570355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.818 [2024-12-09 
23:06:04.570439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:48.818 [2024-12-09 23:06:04.570529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.818 [2024-12-09 23:06:04.571102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.818 [2024-12-09 23:06:04.571142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:48.818 [2024-12-09 23:06:04.571252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:48.818 [2024-12-09 23:06:04.571273] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:48.818 [2024-12-09 23:06:04.571283] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:48.818 [2024-12-09 23:06:04.571311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:48.818 [2024-12-09 23:06:04.590230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:48.818 spare 00:24:48.818 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.818 23:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:48.818 [2024-12-09 23:06:04.592416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.774 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.033 "name": "raid_bdev1", 00:24:50.033 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:50.033 "strip_size_kb": 0, 00:24:50.033 "state": "online", 00:24:50.033 "raid_level": "raid1", 00:24:50.033 "superblock": true, 00:24:50.033 "num_base_bdevs": 2, 00:24:50.033 "num_base_bdevs_discovered": 2, 00:24:50.033 "num_base_bdevs_operational": 2, 00:24:50.033 "process": { 00:24:50.033 "type": "rebuild", 00:24:50.033 "target": "spare", 00:24:50.033 "progress": { 00:24:50.033 "blocks": 2560, 00:24:50.033 "percent": 32 00:24:50.033 } 00:24:50.033 }, 00:24:50.033 "base_bdevs_list": [ 00:24:50.033 { 00:24:50.033 "name": "spare", 00:24:50.033 "uuid": "cb46c64b-85d8-58c7-be8c-8bf58137a1b2", 00:24:50.033 "is_configured": true, 00:24:50.033 "data_offset": 256, 00:24:50.033 "data_size": 7936 00:24:50.033 }, 00:24:50.033 { 00:24:50.033 "name": "BaseBdev2", 00:24:50.033 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:50.033 "is_configured": true, 00:24:50.033 "data_offset": 256, 00:24:50.033 "data_size": 7936 00:24:50.033 } 00:24:50.033 ] 00:24:50.033 }' 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:50.033 23:06:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.033 [2024-12-09 23:06:05.724788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:50.033 [2024-12-09 23:06:05.798452] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:50.033 [2024-12-09 23:06:05.798540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.033 [2024-12-09 23:06:05.798560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:50.033 [2024-12-09 23:06:05.798570] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:50.033 23:06:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.033 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.316 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.316 "name": "raid_bdev1", 00:24:50.316 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:50.316 "strip_size_kb": 0, 00:24:50.316 "state": "online", 00:24:50.316 "raid_level": "raid1", 00:24:50.316 "superblock": true, 00:24:50.316 "num_base_bdevs": 2, 00:24:50.316 "num_base_bdevs_discovered": 1, 00:24:50.316 "num_base_bdevs_operational": 1, 00:24:50.316 "base_bdevs_list": [ 00:24:50.316 { 00:24:50.316 "name": null, 00:24:50.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.316 "is_configured": false, 00:24:50.316 "data_offset": 0, 00:24:50.316 "data_size": 7936 00:24:50.316 }, 00:24:50.316 { 00:24:50.316 "name": "BaseBdev2", 00:24:50.316 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:50.316 "is_configured": true, 00:24:50.316 "data_offset": 256, 00:24:50.316 
"data_size": 7936 00:24:50.316 } 00:24:50.316 ] 00:24:50.316 }' 00:24:50.316 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.316 23:06:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.577 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.577 "name": "raid_bdev1", 00:24:50.577 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:50.577 "strip_size_kb": 0, 00:24:50.577 "state": "online", 00:24:50.577 "raid_level": "raid1", 00:24:50.577 "superblock": true, 00:24:50.577 "num_base_bdevs": 2, 00:24:50.577 "num_base_bdevs_discovered": 1, 00:24:50.577 "num_base_bdevs_operational": 1, 00:24:50.577 "base_bdevs_list": [ 00:24:50.577 { 00:24:50.577 "name": null, 00:24:50.577 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:50.577 "is_configured": false, 00:24:50.577 "data_offset": 0, 00:24:50.577 "data_size": 7936 00:24:50.578 }, 00:24:50.578 { 00:24:50.578 "name": "BaseBdev2", 00:24:50.578 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:50.578 "is_configured": true, 00:24:50.578 "data_offset": 256, 00:24:50.578 "data_size": 7936 00:24:50.578 } 00:24:50.578 ] 00:24:50.578 }' 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.578 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.836 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.836 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:50.836 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.836 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.836 [2024-12-09 23:06:06.444723] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:50.836 [2024-12-09 23:06:06.444810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.836 [2024-12-09 23:06:06.444838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:24:50.836 [2024-12-09 23:06:06.444859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.836 [2024-12-09 23:06:06.445406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.836 [2024-12-09 23:06:06.445440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:50.836 [2024-12-09 23:06:06.445557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:50.836 [2024-12-09 23:06:06.445579] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:50.836 [2024-12-09 23:06:06.445592] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:50.836 [2024-12-09 23:06:06.445606] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:50.836 BaseBdev1 00:24:50.836 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.836 23:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.776 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.776 "name": "raid_bdev1", 00:24:51.776 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:51.776 "strip_size_kb": 0, 00:24:51.776 "state": "online", 00:24:51.776 "raid_level": "raid1", 00:24:51.776 "superblock": true, 00:24:51.776 "num_base_bdevs": 2, 00:24:51.776 "num_base_bdevs_discovered": 1, 00:24:51.776 "num_base_bdevs_operational": 1, 00:24:51.776 "base_bdevs_list": [ 00:24:51.777 { 00:24:51.777 "name": null, 00:24:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.777 "is_configured": false, 00:24:51.777 "data_offset": 0, 00:24:51.777 "data_size": 7936 00:24:51.777 }, 00:24:51.777 { 00:24:51.777 "name": "BaseBdev2", 00:24:51.777 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:51.777 "is_configured": true, 00:24:51.777 "data_offset": 256, 00:24:51.777 "data_size": 7936 00:24:51.777 } 00:24:51.777 ] 00:24:51.777 }' 00:24:51.777 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.777 23:06:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.036 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.295 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.295 "name": "raid_bdev1", 00:24:52.295 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:52.295 "strip_size_kb": 0, 00:24:52.295 "state": "online", 00:24:52.295 "raid_level": "raid1", 00:24:52.295 "superblock": true, 00:24:52.295 "num_base_bdevs": 2, 00:24:52.295 "num_base_bdevs_discovered": 1, 00:24:52.295 "num_base_bdevs_operational": 1, 00:24:52.295 "base_bdevs_list": [ 00:24:52.295 { 00:24:52.295 "name": null, 00:24:52.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.295 "is_configured": false, 00:24:52.295 "data_offset": 0, 00:24:52.295 "data_size": 7936 00:24:52.295 }, 00:24:52.295 { 00:24:52.295 "name": "BaseBdev2", 00:24:52.295 "uuid": 
"7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:52.295 "is_configured": true, 00:24:52.295 "data_offset": 256, 00:24:52.295 "data_size": 7936 00:24:52.295 } 00:24:52.295 ] 00:24:52.295 }' 00:24:52.295 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.295 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:52.295 23:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.295 [2024-12-09 23:06:08.032718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:24:52.295 [2024-12-09 23:06:08.032918] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:52.295 [2024-12-09 23:06:08.032934] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:52.295 request: 00:24:52.295 { 00:24:52.295 "base_bdev": "BaseBdev1", 00:24:52.295 "raid_bdev": "raid_bdev1", 00:24:52.295 "method": "bdev_raid_add_base_bdev", 00:24:52.295 "req_id": 1 00:24:52.295 } 00:24:52.295 Got JSON-RPC error response 00:24:52.295 response: 00:24:52.295 { 00:24:52.295 "code": -22, 00:24:52.295 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:52.295 } 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:52.295 23:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.233 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.492 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.492 "name": "raid_bdev1", 00:24:53.492 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:53.492 "strip_size_kb": 0, 00:24:53.492 "state": "online", 00:24:53.492 "raid_level": "raid1", 00:24:53.492 "superblock": true, 00:24:53.492 "num_base_bdevs": 2, 00:24:53.492 "num_base_bdevs_discovered": 1, 00:24:53.492 "num_base_bdevs_operational": 1, 00:24:53.492 "base_bdevs_list": [ 00:24:53.492 { 00:24:53.492 "name": null, 00:24:53.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.492 "is_configured": false, 00:24:53.492 "data_offset": 0, 00:24:53.492 "data_size": 7936 00:24:53.492 }, 00:24:53.492 { 00:24:53.492 "name": "BaseBdev2", 00:24:53.492 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:53.492 "is_configured": true, 00:24:53.492 "data_offset": 256, 00:24:53.492 "data_size": 7936 00:24:53.492 } 
00:24:53.492 ] 00:24:53.492 }' 00:24:53.492 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.492 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.755 "name": "raid_bdev1", 00:24:53.755 "uuid": "06f97d90-18ec-49a7-8e94-213f3f223971", 00:24:53.755 "strip_size_kb": 0, 00:24:53.755 "state": "online", 00:24:53.755 "raid_level": "raid1", 00:24:53.755 "superblock": true, 00:24:53.755 "num_base_bdevs": 2, 00:24:53.755 "num_base_bdevs_discovered": 1, 00:24:53.755 "num_base_bdevs_operational": 1, 00:24:53.755 "base_bdevs_list": [ 00:24:53.755 { 00:24:53.755 "name": null, 00:24:53.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.755 "is_configured": false, 
00:24:53.755 "data_offset": 0, 00:24:53.755 "data_size": 7936 00:24:53.755 }, 00:24:53.755 { 00:24:53.755 "name": "BaseBdev2", 00:24:53.755 "uuid": "7d78e33b-b6cb-58c4-bd13-59527ffda05c", 00:24:53.755 "is_configured": true, 00:24:53.755 "data_offset": 256, 00:24:53.755 "data_size": 7936 00:24:53.755 } 00:24:53.755 ] 00:24:53.755 }' 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87224 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87224 ']' 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87224 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.755 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87224 00:24:54.014 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.014 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.014 killing process with pid 87224 00:24:54.014 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87224' 00:24:54.014 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87224 00:24:54.014 Received 
shutdown signal, test time was about 60.000000 seconds 00:24:54.014 00:24:54.014 Latency(us) 00:24:54.014 [2024-12-09T23:06:09.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.014 [2024-12-09T23:06:09.870Z] =================================================================================================================== 00:24:54.014 [2024-12-09T23:06:09.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:54.014 [2024-12-09 23:06:09.625350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:54.014 [2024-12-09 23:06:09.625506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:54.014 23:06:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87224 00:24:54.014 [2024-12-09 23:06:09.625576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:54.014 [2024-12-09 23:06:09.625590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:54.273 [2024-12-09 23:06:09.981508] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:55.651 23:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:24:55.651 00:24:55.651 real 0m20.589s 00:24:55.651 user 0m27.011s 00:24:55.651 sys 0m2.698s 00:24:55.651 23:06:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.651 23:06:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:55.651 ************************************ 00:24:55.651 END TEST raid_rebuild_test_sb_4k 00:24:55.651 ************************************ 00:24:55.651 23:06:11 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:24:55.651 23:06:11 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:24:55.651 
23:06:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:55.651 23:06:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:55.651 23:06:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:55.651 ************************************ 00:24:55.651 START TEST raid_state_function_test_sb_md_separate 00:24:55.651 ************************************ 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:55.651 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87920 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87920' 00:24:55.652 Process raid pid: 87920 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87920 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87920 ']' 00:24:55.652 23:06:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.652 23:06:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.652 [2024-12-09 23:06:11.377804] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:24:55.652 [2024-12-09 23:06:11.377924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.911 [2024-12-09 23:06:11.558194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.911 [2024-12-09 23:06:11.680981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.171 [2024-12-09 23:06:11.896075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:56.171 [2024-12-09 23:06:11.896141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:56.431 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.431 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:24:56.431 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:56.431 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.431 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.432 [2024-12-09 23:06:12.231743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:56.432 [2024-12-09 23:06:12.231795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:56.432 [2024-12-09 23:06:12.231807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:56.432 [2024-12-09 23:06:12.231818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.432 "name": "Existed_Raid", 00:24:56.432 "uuid": "bb9ea38d-b2ad-46dc-bdf2-79ab040cbb75", 00:24:56.432 "strip_size_kb": 0, 00:24:56.432 "state": "configuring", 00:24:56.432 "raid_level": "raid1", 00:24:56.432 "superblock": true, 00:24:56.432 "num_base_bdevs": 2, 00:24:56.432 "num_base_bdevs_discovered": 0, 00:24:56.432 "num_base_bdevs_operational": 2, 00:24:56.432 "base_bdevs_list": [ 00:24:56.432 { 00:24:56.432 "name": "BaseBdev1", 00:24:56.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.432 "is_configured": false, 00:24:56.432 "data_offset": 0, 00:24:56.432 "data_size": 0 00:24:56.432 }, 00:24:56.432 { 00:24:56.432 "name": "BaseBdev2", 00:24:56.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.432 "is_configured": false, 00:24:56.432 "data_offset": 0, 00:24:56.432 "data_size": 0 00:24:56.432 } 00:24:56.432 ] 00:24:56.432 }' 00:24:56.432 23:06:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.432 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 [2024-12-09 23:06:12.635009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:57.000 [2024-12-09 23:06:12.635054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 [2024-12-09 23:06:12.646984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:57.000 [2024-12-09 23:06:12.647029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:57.000 [2024-12-09 23:06:12.647040] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.000 [2024-12-09 23:06:12.647054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.000 23:06:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 [2024-12-09 23:06:12.697575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.000 BaseBdev1 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.000 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 [ 00:24:57.000 { 00:24:57.000 "name": "BaseBdev1", 00:24:57.000 "aliases": [ 00:24:57.000 "121d0e94-d1e0-4746-b3e8-7d9bcfe946ff" 00:24:57.000 ], 00:24:57.000 "product_name": "Malloc disk", 00:24:57.000 "block_size": 4096, 00:24:57.000 "num_blocks": 8192, 00:24:57.000 "uuid": "121d0e94-d1e0-4746-b3e8-7d9bcfe946ff", 00:24:57.000 "md_size": 32, 00:24:57.000 "md_interleave": false, 00:24:57.000 "dif_type": 0, 00:24:57.000 "assigned_rate_limits": { 00:24:57.000 "rw_ios_per_sec": 0, 00:24:57.000 "rw_mbytes_per_sec": 0, 00:24:57.000 "r_mbytes_per_sec": 0, 00:24:57.000 "w_mbytes_per_sec": 0 00:24:57.000 }, 00:24:57.000 "claimed": true, 00:24:57.000 "claim_type": "exclusive_write", 00:24:57.000 "zoned": false, 00:24:57.000 "supported_io_types": { 00:24:57.000 "read": true, 00:24:57.000 "write": true, 00:24:57.000 "unmap": true, 00:24:57.000 "flush": true, 00:24:57.000 "reset": true, 00:24:57.000 "nvme_admin": false, 00:24:57.000 "nvme_io": false, 00:24:57.000 "nvme_io_md": false, 00:24:57.000 "write_zeroes": true, 00:24:57.000 "zcopy": true, 00:24:57.000 "get_zone_info": false, 00:24:57.000 "zone_management": false, 00:24:57.000 "zone_append": false, 00:24:57.000 "compare": false, 00:24:57.000 "compare_and_write": false, 00:24:57.000 "abort": true, 00:24:57.000 "seek_hole": false, 00:24:57.000 "seek_data": false, 00:24:57.000 "copy": true, 00:24:57.000 "nvme_iov_md": false 00:24:57.000 }, 00:24:57.001 "memory_domains": [ 00:24:57.001 { 00:24:57.001 "dma_device_id": "system", 00:24:57.001 "dma_device_type": 1 00:24:57.001 }, 
00:24:57.001 { 00:24:57.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.001 "dma_device_type": 2 00:24:57.001 } 00:24:57.001 ], 00:24:57.001 "driver_specific": {} 00:24:57.001 } 00:24:57.001 ] 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.001 "name": "Existed_Raid", 00:24:57.001 "uuid": "d89ddb32-7afe-4528-b0a9-75e4457ac396", 00:24:57.001 "strip_size_kb": 0, 00:24:57.001 "state": "configuring", 00:24:57.001 "raid_level": "raid1", 00:24:57.001 "superblock": true, 00:24:57.001 "num_base_bdevs": 2, 00:24:57.001 "num_base_bdevs_discovered": 1, 00:24:57.001 "num_base_bdevs_operational": 2, 00:24:57.001 "base_bdevs_list": [ 00:24:57.001 { 00:24:57.001 "name": "BaseBdev1", 00:24:57.001 "uuid": "121d0e94-d1e0-4746-b3e8-7d9bcfe946ff", 00:24:57.001 "is_configured": true, 00:24:57.001 "data_offset": 256, 00:24:57.001 "data_size": 7936 00:24:57.001 }, 00:24:57.001 { 00:24:57.001 "name": "BaseBdev2", 00:24:57.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.001 "is_configured": false, 00:24:57.001 "data_offset": 0, 00:24:57.001 "data_size": 0 00:24:57.001 } 00:24:57.001 ] 00:24:57.001 }' 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.001 23:06:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:24:57.571 [2024-12-09 23:06:13.168871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:57.571 [2024-12-09 23:06:13.168935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.571 [2024-12-09 23:06:13.180909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.571 [2024-12-09 23:06:13.182773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:57.571 [2024-12-09 23:06:13.182819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.571 "name": "Existed_Raid", 00:24:57.571 "uuid": "3e4f7be2-6325-4a47-af3e-83c762955676", 00:24:57.571 "strip_size_kb": 0, 00:24:57.571 "state": "configuring", 00:24:57.571 "raid_level": "raid1", 00:24:57.571 "superblock": true, 00:24:57.571 "num_base_bdevs": 2, 00:24:57.571 "num_base_bdevs_discovered": 1, 00:24:57.571 
"num_base_bdevs_operational": 2, 00:24:57.571 "base_bdevs_list": [ 00:24:57.571 { 00:24:57.571 "name": "BaseBdev1", 00:24:57.571 "uuid": "121d0e94-d1e0-4746-b3e8-7d9bcfe946ff", 00:24:57.571 "is_configured": true, 00:24:57.571 "data_offset": 256, 00:24:57.571 "data_size": 7936 00:24:57.571 }, 00:24:57.571 { 00:24:57.571 "name": "BaseBdev2", 00:24:57.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.571 "is_configured": false, 00:24:57.571 "data_offset": 0, 00:24:57.571 "data_size": 0 00:24:57.571 } 00:24:57.571 ] 00:24:57.571 }' 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.571 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.831 [2024-12-09 23:06:13.670563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.831 [2024-12-09 23:06:13.670848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:57.831 [2024-12-09 23:06:13.670867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:57.831 [2024-12-09 23:06:13.670955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:57.831 [2024-12-09 23:06:13.671112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:57.831 [2024-12-09 23:06:13.671136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:57.831 [2024-12-09 
23:06:13.671237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.831 BaseBdev2 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.831 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.091 [ 00:24:58.091 { 00:24:58.091 "name": "BaseBdev2", 00:24:58.091 "aliases": [ 00:24:58.091 
"ba2c0566-b41b-4272-99f3-9492483edd06" 00:24:58.091 ], 00:24:58.091 "product_name": "Malloc disk", 00:24:58.091 "block_size": 4096, 00:24:58.091 "num_blocks": 8192, 00:24:58.091 "uuid": "ba2c0566-b41b-4272-99f3-9492483edd06", 00:24:58.091 "md_size": 32, 00:24:58.091 "md_interleave": false, 00:24:58.091 "dif_type": 0, 00:24:58.091 "assigned_rate_limits": { 00:24:58.091 "rw_ios_per_sec": 0, 00:24:58.091 "rw_mbytes_per_sec": 0, 00:24:58.091 "r_mbytes_per_sec": 0, 00:24:58.091 "w_mbytes_per_sec": 0 00:24:58.091 }, 00:24:58.091 "claimed": true, 00:24:58.091 "claim_type": "exclusive_write", 00:24:58.091 "zoned": false, 00:24:58.091 "supported_io_types": { 00:24:58.091 "read": true, 00:24:58.091 "write": true, 00:24:58.091 "unmap": true, 00:24:58.091 "flush": true, 00:24:58.091 "reset": true, 00:24:58.091 "nvme_admin": false, 00:24:58.091 "nvme_io": false, 00:24:58.091 "nvme_io_md": false, 00:24:58.091 "write_zeroes": true, 00:24:58.091 "zcopy": true, 00:24:58.091 "get_zone_info": false, 00:24:58.091 "zone_management": false, 00:24:58.091 "zone_append": false, 00:24:58.091 "compare": false, 00:24:58.091 "compare_and_write": false, 00:24:58.091 "abort": true, 00:24:58.091 "seek_hole": false, 00:24:58.091 "seek_data": false, 00:24:58.091 "copy": true, 00:24:58.091 "nvme_iov_md": false 00:24:58.091 }, 00:24:58.091 "memory_domains": [ 00:24:58.091 { 00:24:58.091 "dma_device_id": "system", 00:24:58.091 "dma_device_type": 1 00:24:58.091 }, 00:24:58.091 { 00:24:58.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.091 "dma_device_type": 2 00:24:58.091 } 00:24:58.091 ], 00:24:58.091 "driver_specific": {} 00:24:58.091 } 00:24:58.091 ] 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.091 23:06:13 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.091 "name": "Existed_Raid", 00:24:58.091 "uuid": "3e4f7be2-6325-4a47-af3e-83c762955676", 00:24:58.091 "strip_size_kb": 0, 00:24:58.091 "state": "online", 00:24:58.091 "raid_level": "raid1", 00:24:58.091 "superblock": true, 00:24:58.091 "num_base_bdevs": 2, 00:24:58.091 "num_base_bdevs_discovered": 2, 00:24:58.091 "num_base_bdevs_operational": 2, 00:24:58.091 "base_bdevs_list": [ 00:24:58.091 { 00:24:58.091 "name": "BaseBdev1", 00:24:58.091 "uuid": "121d0e94-d1e0-4746-b3e8-7d9bcfe946ff", 00:24:58.091 "is_configured": true, 00:24:58.091 "data_offset": 256, 00:24:58.091 "data_size": 7936 00:24:58.091 }, 00:24:58.091 { 00:24:58.091 "name": "BaseBdev2", 00:24:58.091 "uuid": "ba2c0566-b41b-4272-99f3-9492483edd06", 00:24:58.091 "is_configured": true, 00:24:58.091 "data_offset": 256, 00:24:58.091 "data_size": 7936 00:24:58.091 } 00:24:58.091 ] 00:24:58.091 }' 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.091 23:06:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:58.351 23:06:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.351 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:58.351 [2024-12-09 23:06:14.198100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:58.610 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.610 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:58.610 "name": "Existed_Raid", 00:24:58.610 "aliases": [ 00:24:58.610 "3e4f7be2-6325-4a47-af3e-83c762955676" 00:24:58.610 ], 00:24:58.610 "product_name": "Raid Volume", 00:24:58.610 "block_size": 4096, 00:24:58.610 "num_blocks": 7936, 00:24:58.610 "uuid": "3e4f7be2-6325-4a47-af3e-83c762955676", 00:24:58.610 "md_size": 32, 00:24:58.610 "md_interleave": false, 00:24:58.610 "dif_type": 0, 00:24:58.610 "assigned_rate_limits": { 00:24:58.610 "rw_ios_per_sec": 0, 00:24:58.610 "rw_mbytes_per_sec": 0, 00:24:58.610 "r_mbytes_per_sec": 0, 00:24:58.610 "w_mbytes_per_sec": 0 00:24:58.610 }, 00:24:58.610 "claimed": false, 00:24:58.610 "zoned": false, 00:24:58.610 "supported_io_types": { 00:24:58.610 "read": true, 00:24:58.610 "write": true, 00:24:58.610 "unmap": false, 00:24:58.610 "flush": false, 00:24:58.610 "reset": true, 00:24:58.610 "nvme_admin": false, 00:24:58.610 "nvme_io": false, 00:24:58.610 "nvme_io_md": false, 00:24:58.610 "write_zeroes": true, 00:24:58.610 "zcopy": false, 00:24:58.610 "get_zone_info": 
false, 00:24:58.610 "zone_management": false, 00:24:58.610 "zone_append": false, 00:24:58.610 "compare": false, 00:24:58.610 "compare_and_write": false, 00:24:58.610 "abort": false, 00:24:58.610 "seek_hole": false, 00:24:58.610 "seek_data": false, 00:24:58.610 "copy": false, 00:24:58.610 "nvme_iov_md": false 00:24:58.610 }, 00:24:58.610 "memory_domains": [ 00:24:58.610 { 00:24:58.610 "dma_device_id": "system", 00:24:58.610 "dma_device_type": 1 00:24:58.610 }, 00:24:58.610 { 00:24:58.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.610 "dma_device_type": 2 00:24:58.610 }, 00:24:58.610 { 00:24:58.610 "dma_device_id": "system", 00:24:58.610 "dma_device_type": 1 00:24:58.610 }, 00:24:58.610 { 00:24:58.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.610 "dma_device_type": 2 00:24:58.610 } 00:24:58.610 ], 00:24:58.611 "driver_specific": { 00:24:58.611 "raid": { 00:24:58.611 "uuid": "3e4f7be2-6325-4a47-af3e-83c762955676", 00:24:58.611 "strip_size_kb": 0, 00:24:58.611 "state": "online", 00:24:58.611 "raid_level": "raid1", 00:24:58.611 "superblock": true, 00:24:58.611 "num_base_bdevs": 2, 00:24:58.611 "num_base_bdevs_discovered": 2, 00:24:58.611 "num_base_bdevs_operational": 2, 00:24:58.611 "base_bdevs_list": [ 00:24:58.611 { 00:24:58.611 "name": "BaseBdev1", 00:24:58.611 "uuid": "121d0e94-d1e0-4746-b3e8-7d9bcfe946ff", 00:24:58.611 "is_configured": true, 00:24:58.611 "data_offset": 256, 00:24:58.611 "data_size": 7936 00:24:58.611 }, 00:24:58.611 { 00:24:58.611 "name": "BaseBdev2", 00:24:58.611 "uuid": "ba2c0566-b41b-4272-99f3-9492483edd06", 00:24:58.611 "is_configured": true, 00:24:58.611 "data_offset": 256, 00:24:58.611 "data_size": 7936 00:24:58.611 } 00:24:58.611 ] 00:24:58.611 } 00:24:58.611 } 00:24:58.611 }' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:58.611 23:06:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:58.611 BaseBdev2' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.611 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.611 [2024-12-09 23:06:14.401500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.871 "name": "Existed_Raid", 00:24:58.871 "uuid": 
"3e4f7be2-6325-4a47-af3e-83c762955676", 00:24:58.871 "strip_size_kb": 0, 00:24:58.871 "state": "online", 00:24:58.871 "raid_level": "raid1", 00:24:58.871 "superblock": true, 00:24:58.871 "num_base_bdevs": 2, 00:24:58.871 "num_base_bdevs_discovered": 1, 00:24:58.871 "num_base_bdevs_operational": 1, 00:24:58.871 "base_bdevs_list": [ 00:24:58.871 { 00:24:58.871 "name": null, 00:24:58.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.871 "is_configured": false, 00:24:58.871 "data_offset": 0, 00:24:58.871 "data_size": 7936 00:24:58.871 }, 00:24:58.871 { 00:24:58.871 "name": "BaseBdev2", 00:24:58.871 "uuid": "ba2c0566-b41b-4272-99f3-9492483edd06", 00:24:58.871 "is_configured": true, 00:24:58.871 "data_offset": 256, 00:24:58.871 "data_size": 7936 00:24:58.871 } 00:24:58.871 ] 00:24:58.871 }' 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.871 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:59.156 23:06:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.418 [2024-12-09 23:06:15.032891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:59.418 [2024-12-09 23:06:15.033019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:59.418 [2024-12-09 23:06:15.143656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.418 [2024-12-09 23:06:15.143719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.418 [2024-12-09 23:06:15.143733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:59.418 23:06:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87920 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87920 ']' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87920 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87920 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.418 killing process with pid 87920 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87920' 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87920 00:24:59.418 [2024-12-09 23:06:15.245203] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.418 23:06:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87920 00:24:59.418 [2024-12-09 23:06:15.265324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.811 23:06:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:25:00.811 00:25:00.811 real 0m5.224s 00:25:00.811 user 0m7.403s 00:25:00.811 sys 0m0.905s 00:25:00.811 23:06:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.811 23:06:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.811 ************************************ 00:25:00.811 END TEST raid_state_function_test_sb_md_separate 00:25:00.811 ************************************ 00:25:00.811 23:06:16 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:25:00.811 23:06:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:00.811 23:06:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.811 23:06:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.811 ************************************ 00:25:00.811 START TEST raid_superblock_test_md_separate 00:25:00.811 ************************************ 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88175 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88175 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88175 ']' 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.811 23:06:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.811 23:06:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.076 [2024-12-09 23:06:16.675614] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:25:01.076 [2024-12-09 23:06:16.675743] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88175 ] 00:25:01.076 [2024-12-09 23:06:16.854314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.335 [2024-12-09 23:06:16.980489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.594 [2024-12-09 23:06:17.195960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.594 [2024-12-09 23:06:17.196005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:01.854 23:06:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.854 malloc1 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.854 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:01.855 [2024-12-09 23:06:17.674165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:01.855 [2024-12-09 23:06:17.674242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.855 [2024-12-09 23:06:17.674266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:25:01.855 [2024-12-09 23:06:17.674277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.855 [2024-12-09 23:06:17.676322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.855 [2024-12-09 23:06:17.676360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:01.855 pt1 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.855 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.115 malloc2 00:25:02.115 23:06:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.115 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:02.115 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.115 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.115 [2024-12-09 23:06:17.736885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:02.115 [2024-12-09 23:06:17.736948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.115 [2024-12-09 23:06:17.736973] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:02.115 [2024-12-09 23:06:17.736986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.115 [2024-12-09 23:06:17.739239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.116 [2024-12-09 23:06:17.739274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:02.116 pt2 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.116 [2024-12-09 23:06:17.748888] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:02.116 [2024-12-09 23:06:17.750999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:02.116 [2024-12-09 23:06:17.751207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:02.116 [2024-12-09 23:06:17.751232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:02.116 [2024-12-09 23:06:17.751320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:02.116 [2024-12-09 23:06:17.751478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:02.116 [2024-12-09 23:06:17.751498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:02.116 [2024-12-09 23:06:17.751654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.116 23:06:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.116 "name": "raid_bdev1", 00:25:02.116 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:02.116 "strip_size_kb": 0, 00:25:02.116 "state": "online", 00:25:02.116 "raid_level": "raid1", 00:25:02.116 "superblock": true, 00:25:02.116 "num_base_bdevs": 2, 00:25:02.116 "num_base_bdevs_discovered": 2, 00:25:02.116 "num_base_bdevs_operational": 2, 00:25:02.116 "base_bdevs_list": [ 00:25:02.116 { 00:25:02.116 "name": "pt1", 00:25:02.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:02.116 "is_configured": true, 00:25:02.116 "data_offset": 256, 00:25:02.116 "data_size": 7936 00:25:02.116 }, 00:25:02.116 { 00:25:02.116 "name": "pt2", 00:25:02.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.116 "is_configured": true, 00:25:02.116 "data_offset": 256, 00:25:02.116 "data_size": 7936 00:25:02.116 } 00:25:02.116 ] 00:25:02.116 }' 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:02.116 23:06:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:02.375 [2024-12-09 23:06:18.164857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.375 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:02.375 "name": "raid_bdev1", 00:25:02.375 "aliases": [ 00:25:02.375 "d7fa9c3e-3437-428d-9e61-dfbd17589cee" 00:25:02.375 ], 00:25:02.375 "product_name": "Raid Volume", 00:25:02.375 "block_size": 4096, 00:25:02.375 "num_blocks": 7936, 00:25:02.375 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:02.375 "md_size": 32, 
00:25:02.375 "md_interleave": false, 00:25:02.375 "dif_type": 0, 00:25:02.375 "assigned_rate_limits": { 00:25:02.375 "rw_ios_per_sec": 0, 00:25:02.375 "rw_mbytes_per_sec": 0, 00:25:02.375 "r_mbytes_per_sec": 0, 00:25:02.375 "w_mbytes_per_sec": 0 00:25:02.375 }, 00:25:02.376 "claimed": false, 00:25:02.376 "zoned": false, 00:25:02.376 "supported_io_types": { 00:25:02.376 "read": true, 00:25:02.376 "write": true, 00:25:02.376 "unmap": false, 00:25:02.376 "flush": false, 00:25:02.376 "reset": true, 00:25:02.376 "nvme_admin": false, 00:25:02.376 "nvme_io": false, 00:25:02.376 "nvme_io_md": false, 00:25:02.376 "write_zeroes": true, 00:25:02.376 "zcopy": false, 00:25:02.376 "get_zone_info": false, 00:25:02.376 "zone_management": false, 00:25:02.376 "zone_append": false, 00:25:02.376 "compare": false, 00:25:02.376 "compare_and_write": false, 00:25:02.376 "abort": false, 00:25:02.376 "seek_hole": false, 00:25:02.376 "seek_data": false, 00:25:02.376 "copy": false, 00:25:02.376 "nvme_iov_md": false 00:25:02.376 }, 00:25:02.376 "memory_domains": [ 00:25:02.376 { 00:25:02.376 "dma_device_id": "system", 00:25:02.376 "dma_device_type": 1 00:25:02.376 }, 00:25:02.376 { 00:25:02.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.376 "dma_device_type": 2 00:25:02.376 }, 00:25:02.376 { 00:25:02.376 "dma_device_id": "system", 00:25:02.376 "dma_device_type": 1 00:25:02.376 }, 00:25:02.376 { 00:25:02.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.376 "dma_device_type": 2 00:25:02.376 } 00:25:02.376 ], 00:25:02.376 "driver_specific": { 00:25:02.376 "raid": { 00:25:02.376 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:02.376 "strip_size_kb": 0, 00:25:02.376 "state": "online", 00:25:02.376 "raid_level": "raid1", 00:25:02.376 "superblock": true, 00:25:02.376 "num_base_bdevs": 2, 00:25:02.376 "num_base_bdevs_discovered": 2, 00:25:02.376 "num_base_bdevs_operational": 2, 00:25:02.376 "base_bdevs_list": [ 00:25:02.376 { 00:25:02.376 "name": "pt1", 00:25:02.376 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:02.376 "is_configured": true, 00:25:02.376 "data_offset": 256, 00:25:02.376 "data_size": 7936 00:25:02.376 }, 00:25:02.376 { 00:25:02.376 "name": "pt2", 00:25:02.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.376 "is_configured": true, 00:25:02.376 "data_offset": 256, 00:25:02.376 "data_size": 7936 00:25:02.376 } 00:25:02.376 ] 00:25:02.376 } 00:25:02.376 } 00:25:02.376 }' 00:25:02.376 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:02.635 pt2' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.635 [2024-12-09 23:06:18.388384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d7fa9c3e-3437-428d-9e61-dfbd17589cee 00:25:02.635 
23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z d7fa9c3e-3437-428d-9e61-dfbd17589cee ']' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.635 [2024-12-09 23:06:18.416074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.635 [2024-12-09 23:06:18.416106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:02.635 [2024-12-09 23:06:18.416196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:02.635 [2024-12-09 23:06:18.416260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:02.635 [2024-12-09 23:06:18.416274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:02.635 23:06:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.635 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.900 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.900 [2024-12-09 23:06:18.563881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:02.900 [2024-12-09 23:06:18.566013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:02.900 [2024-12-09 23:06:18.566104] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:02.900 [2024-12-09 23:06:18.566177] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:25:02.900 [2024-12-09 23:06:18.566193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.900 [2024-12-09 23:06:18.566205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:02.900 request: 00:25:02.900 { 00:25:02.900 "name": "raid_bdev1", 00:25:02.900 "raid_level": "raid1", 00:25:02.900 "base_bdevs": [ 00:25:02.900 "malloc1", 00:25:02.900 "malloc2" 00:25:02.900 ], 00:25:02.900 "superblock": false, 00:25:02.900 "method": "bdev_raid_create", 00:25:02.900 "req_id": 1 00:25:02.900 } 00:25:02.900 Got JSON-RPC error response 00:25:02.900 response: 00:25:02.901 { 00:25:02.901 "code": -17, 00:25:02.901 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:02.901 } 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.901 23:06:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.901 [2024-12-09 23:06:18.631739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:02.901 [2024-12-09 23:06:18.631810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.901 [2024-12-09 23:06:18.631832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:02.901 [2024-12-09 23:06:18.631845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.901 [2024-12-09 23:06:18.634177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.901 [2024-12-09 23:06:18.634219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:02.901 [2024-12-09 23:06:18.634284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:02.901 [2024-12-09 23:06:18.634347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:02.901 pt1 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:02.901 
23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.901 "name": "raid_bdev1", 00:25:02.901 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:02.901 "strip_size_kb": 0, 00:25:02.901 "state": "configuring", 00:25:02.901 "raid_level": "raid1", 00:25:02.901 "superblock": true, 00:25:02.901 "num_base_bdevs": 2, 00:25:02.901 "num_base_bdevs_discovered": 1, 00:25:02.901 
"num_base_bdevs_operational": 2, 00:25:02.901 "base_bdevs_list": [ 00:25:02.901 { 00:25:02.901 "name": "pt1", 00:25:02.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:02.901 "is_configured": true, 00:25:02.901 "data_offset": 256, 00:25:02.901 "data_size": 7936 00:25:02.901 }, 00:25:02.901 { 00:25:02.901 "name": null, 00:25:02.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.901 "is_configured": false, 00:25:02.901 "data_offset": 256, 00:25:02.901 "data_size": 7936 00:25:02.901 } 00:25:02.901 ] 00:25:02.901 }' 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.901 23:06:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.475 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:03.475 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:03.475 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:03.475 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:03.475 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.475 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.475 [2024-12-09 23:06:19.106921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:03.475 [2024-12-09 23:06:19.107011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.475 [2024-12-09 23:06:19.107036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:03.475 [2024-12-09 23:06:19.107049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.476 
[2024-12-09 23:06:19.107317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.476 [2024-12-09 23:06:19.107343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:03.476 [2024-12-09 23:06:19.107404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:03.476 [2024-12-09 23:06:19.107430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:03.476 [2024-12-09 23:06:19.107575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:03.476 [2024-12-09 23:06:19.107589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:03.476 [2024-12-09 23:06:19.107678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:03.476 [2024-12-09 23:06:19.107808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:03.476 [2024-12-09 23:06:19.107818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:03.476 [2024-12-09 23:06:19.107931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.476 pt2 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.476 "name": "raid_bdev1", 00:25:03.476 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:03.476 "strip_size_kb": 0, 00:25:03.476 "state": "online", 00:25:03.476 "raid_level": "raid1", 00:25:03.476 "superblock": true, 00:25:03.476 "num_base_bdevs": 2, 00:25:03.476 "num_base_bdevs_discovered": 2, 00:25:03.476 "num_base_bdevs_operational": 2, 00:25:03.476 "base_bdevs_list": [ 00:25:03.476 { 00:25:03.476 "name": 
"pt1", 00:25:03.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:03.476 "is_configured": true, 00:25:03.476 "data_offset": 256, 00:25:03.476 "data_size": 7936 00:25:03.476 }, 00:25:03.476 { 00:25:03.476 "name": "pt2", 00:25:03.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.476 "is_configured": true, 00:25:03.476 "data_offset": 256, 00:25:03.476 "data_size": 7936 00:25:03.476 } 00:25:03.476 ] 00:25:03.476 }' 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.476 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.736 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:03.736 [2024-12-09 23:06:19.582486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:03.997 23:06:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.997 "name": "raid_bdev1", 00:25:03.997 "aliases": [ 00:25:03.997 "d7fa9c3e-3437-428d-9e61-dfbd17589cee" 00:25:03.997 ], 00:25:03.997 "product_name": "Raid Volume", 00:25:03.997 "block_size": 4096, 00:25:03.997 "num_blocks": 7936, 00:25:03.997 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:03.997 "md_size": 32, 00:25:03.997 "md_interleave": false, 00:25:03.997 "dif_type": 0, 00:25:03.997 "assigned_rate_limits": { 00:25:03.997 "rw_ios_per_sec": 0, 00:25:03.997 "rw_mbytes_per_sec": 0, 00:25:03.997 "r_mbytes_per_sec": 0, 00:25:03.997 "w_mbytes_per_sec": 0 00:25:03.997 }, 00:25:03.997 "claimed": false, 00:25:03.997 "zoned": false, 00:25:03.997 "supported_io_types": { 00:25:03.997 "read": true, 00:25:03.997 "write": true, 00:25:03.997 "unmap": false, 00:25:03.997 "flush": false, 00:25:03.997 "reset": true, 00:25:03.997 "nvme_admin": false, 00:25:03.997 "nvme_io": false, 00:25:03.997 "nvme_io_md": false, 00:25:03.997 "write_zeroes": true, 00:25:03.997 "zcopy": false, 00:25:03.997 "get_zone_info": false, 00:25:03.997 "zone_management": false, 00:25:03.997 "zone_append": false, 00:25:03.997 "compare": false, 00:25:03.997 "compare_and_write": false, 00:25:03.997 "abort": false, 00:25:03.997 "seek_hole": false, 00:25:03.997 "seek_data": false, 00:25:03.997 "copy": false, 00:25:03.997 "nvme_iov_md": false 00:25:03.997 }, 00:25:03.997 "memory_domains": [ 00:25:03.997 { 00:25:03.997 "dma_device_id": "system", 00:25:03.997 "dma_device_type": 1 00:25:03.997 }, 00:25:03.997 { 00:25:03.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.997 "dma_device_type": 2 00:25:03.997 }, 00:25:03.997 { 00:25:03.997 "dma_device_id": "system", 00:25:03.997 "dma_device_type": 1 00:25:03.997 }, 00:25:03.997 { 00:25:03.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.997 
"dma_device_type": 2 00:25:03.997 } 00:25:03.997 ], 00:25:03.997 "driver_specific": { 00:25:03.997 "raid": { 00:25:03.997 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:03.997 "strip_size_kb": 0, 00:25:03.997 "state": "online", 00:25:03.997 "raid_level": "raid1", 00:25:03.997 "superblock": true, 00:25:03.997 "num_base_bdevs": 2, 00:25:03.997 "num_base_bdevs_discovered": 2, 00:25:03.997 "num_base_bdevs_operational": 2, 00:25:03.997 "base_bdevs_list": [ 00:25:03.997 { 00:25:03.997 "name": "pt1", 00:25:03.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:03.997 "is_configured": true, 00:25:03.997 "data_offset": 256, 00:25:03.997 "data_size": 7936 00:25:03.997 }, 00:25:03.997 { 00:25:03.997 "name": "pt2", 00:25:03.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.997 "is_configured": true, 00:25:03.997 "data_offset": 256, 00:25:03.997 "data_size": 7936 00:25:03.997 } 00:25:03.997 ] 00:25:03.997 } 00:25:03.997 } 00:25:03.997 }' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:03.997 pt2' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.997 [2024-12-09 23:06:19.810057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' d7fa9c3e-3437-428d-9e61-dfbd17589cee '!=' d7fa9c3e-3437-428d-9e61-dfbd17589cee ']' 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.997 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:03.997 [2024-12-09 23:06:19.849748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.257 "name": "raid_bdev1", 00:25:04.257 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:04.257 "strip_size_kb": 0, 00:25:04.257 "state": "online", 00:25:04.257 "raid_level": "raid1", 00:25:04.257 "superblock": true, 00:25:04.257 "num_base_bdevs": 2, 00:25:04.257 "num_base_bdevs_discovered": 1, 00:25:04.257 "num_base_bdevs_operational": 1, 00:25:04.257 "base_bdevs_list": [ 00:25:04.257 { 00:25:04.257 "name": null, 00:25:04.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.257 "is_configured": false, 00:25:04.257 "data_offset": 0, 
00:25:04.257 "data_size": 7936 00:25:04.257 }, 00:25:04.257 { 00:25:04.257 "name": "pt2", 00:25:04.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:04.257 "is_configured": true, 00:25:04.257 "data_offset": 256, 00:25:04.257 "data_size": 7936 00:25:04.257 } 00:25:04.257 ] 00:25:04.257 }' 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.257 23:06:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 [2024-12-09 23:06:20.285048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:04.516 [2024-12-09 23:06:20.285090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.516 [2024-12-09 23:06:20.285195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.516 [2024-12-09 23:06:20.285256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.516 [2024-12-09 23:06:20.285293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.516 [2024-12-09 23:06:20.348946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:04.516 [2024-12-09 23:06:20.349021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.516 [2024-12-09 23:06:20.349042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:04.516 [2024-12-09 23:06:20.349056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.516 [2024-12-09 23:06:20.351640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.516 [2024-12-09 23:06:20.351688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:04.516 [2024-12-09 23:06:20.351756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:04.516 [2024-12-09 23:06:20.351814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:04.516 [2024-12-09 23:06:20.351928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:04.516 [2024-12-09 23:06:20.351943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:04.516 [2024-12-09 23:06:20.352034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:04.516 [2024-12-09 23:06:20.352174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:04.516 [2024-12-09 23:06:20.352193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:04.516 [2024-12-09 23:06:20.352310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.516 pt2 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.516 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:04.779 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.779 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.779 "name": "raid_bdev1", 00:25:04.779 
"uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:04.779 "strip_size_kb": 0, 00:25:04.779 "state": "online", 00:25:04.779 "raid_level": "raid1", 00:25:04.779 "superblock": true, 00:25:04.779 "num_base_bdevs": 2, 00:25:04.779 "num_base_bdevs_discovered": 1, 00:25:04.779 "num_base_bdevs_operational": 1, 00:25:04.779 "base_bdevs_list": [ 00:25:04.779 { 00:25:04.779 "name": null, 00:25:04.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.779 "is_configured": false, 00:25:04.779 "data_offset": 256, 00:25:04.779 "data_size": 7936 00:25:04.779 }, 00:25:04.779 { 00:25:04.779 "name": "pt2", 00:25:04.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:04.779 "is_configured": true, 00:25:04.779 "data_offset": 256, 00:25:04.779 "data_size": 7936 00:25:04.779 } 00:25:04.779 ] 00:25:04.779 }' 00:25:04.779 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.779 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.039 [2024-12-09 23:06:20.816191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.039 [2024-12-09 23:06:20.816237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:05.039 [2024-12-09 23:06:20.816331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.039 [2024-12-09 23:06:20.816401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.039 [2024-12-09 23:06:20.816414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.039 [2024-12-09 23:06:20.876135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:05.039 [2024-12-09 23:06:20.876208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.039 [2024-12-09 23:06:20.876234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:05.039 [2024-12-09 23:06:20.876246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.039 [2024-12-09 
23:06:20.878683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.039 [2024-12-09 23:06:20.878720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:05.039 [2024-12-09 23:06:20.878790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:05.039 [2024-12-09 23:06:20.878843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:05.039 [2024-12-09 23:06:20.878995] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:05.039 [2024-12-09 23:06:20.879007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.039 [2024-12-09 23:06:20.879030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:05.039 [2024-12-09 23:06:20.879120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:05.039 [2024-12-09 23:06:20.879212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:05.039 [2024-12-09 23:06:20.879228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:05.039 [2024-12-09 23:06:20.879303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:05.039 [2024-12-09 23:06:20.879442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:05.039 [2024-12-09 23:06:20.879471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:05.039 [2024-12-09 23:06:20.879608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.039 pt1 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.039 23:06:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.039 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.306 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.306 23:06:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.306 "name": "raid_bdev1", 00:25:05.306 "uuid": "d7fa9c3e-3437-428d-9e61-dfbd17589cee", 00:25:05.306 "strip_size_kb": 0, 00:25:05.306 "state": "online", 00:25:05.306 "raid_level": "raid1", 00:25:05.306 "superblock": true, 00:25:05.306 "num_base_bdevs": 2, 00:25:05.306 "num_base_bdevs_discovered": 1, 00:25:05.306 "num_base_bdevs_operational": 1, 00:25:05.306 "base_bdevs_list": [ 00:25:05.306 { 00:25:05.306 "name": null, 00:25:05.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.306 "is_configured": false, 00:25:05.306 "data_offset": 256, 00:25:05.306 "data_size": 7936 00:25:05.306 }, 00:25:05.306 { 00:25:05.306 "name": "pt2", 00:25:05.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:05.306 "is_configured": true, 00:25:05.306 "data_offset": 256, 00:25:05.306 "data_size": 7936 00:25:05.306 } 00:25:05.306 ] 00:25:05.306 }' 00:25:05.306 23:06:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.306 23:06:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.567 [2024-12-09 23:06:21.395551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.567 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' d7fa9c3e-3437-428d-9e61-dfbd17589cee '!=' d7fa9c3e-3437-428d-9e61-dfbd17589cee ']' 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88175 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88175 ']' 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88175 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88175 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.827 killing process with pid 88175 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88175' 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 88175 00:25:05.827 [2024-12-09 23:06:21.479125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:05.827 [2024-12-09 23:06:21.479241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.827 23:06:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88175 00:25:05.827 [2024-12-09 23:06:21.479299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.827 [2024-12-09 23:06:21.479319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:06.086 [2024-12-09 23:06:21.747248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:07.465 23:06:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:25:07.465 00:25:07.465 real 0m6.477s 00:25:07.465 user 0m9.740s 00:25:07.465 sys 0m1.098s 00:25:07.465 23:06:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.465 23:06:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.465 ************************************ 00:25:07.465 END TEST raid_superblock_test_md_separate 00:25:07.465 ************************************ 00:25:07.465 23:06:23 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:25:07.465 23:06:23 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:25:07.465 23:06:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:07.465 23:06:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.465 23:06:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:07.465 ************************************ 00:25:07.465 START TEST raid_rebuild_test_sb_md_separate 00:25:07.465 
************************************ 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88503 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88503 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88503 ']' 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.465 23:06:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.465 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:07.465 Zero copy mechanism will not be used. 00:25:07.465 [2024-12-09 23:06:23.249137] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:25:07.465 [2024-12-09 23:06:23.249320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88503 ] 00:25:07.726 [2024-12-09 23:06:23.441901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.726 [2024-12-09 23:06:23.569444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.988 [2024-12-09 23:06:23.800960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.988 [2024-12-09 23:06:23.801033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.555 BaseBdev1_malloc 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.555 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.555 [2024-12-09 23:06:24.210306] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:08.555 [2024-12-09 23:06:24.210373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.556 [2024-12-09 23:06:24.210399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:08.556 [2024-12-09 23:06:24.210412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.556 [2024-12-09 23:06:24.212693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.556 [2024-12-09 23:06:24.212733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:08.556 BaseBdev1 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 BaseBdev2_malloc 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 [2024-12-09 23:06:24.273075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:08.556 [2024-12-09 23:06:24.273143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.556 [2024-12-09 23:06:24.273167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:08.556 [2024-12-09 23:06:24.273181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.556 [2024-12-09 23:06:24.275271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.556 [2024-12-09 23:06:24.275311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:08.556 BaseBdev2 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 spare_malloc 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 spare_delay 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 [2024-12-09 23:06:24.356222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:08.556 [2024-12-09 23:06:24.356286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.556 [2024-12-09 23:06:24.356314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:08.556 [2024-12-09 23:06:24.356325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.556 [2024-12-09 23:06:24.358588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.556 [2024-12-09 23:06:24.358626] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:08.556 spare 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:08.556 
23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 [2024-12-09 23:06:24.368282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.556 [2024-12-09 23:06:24.370336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:08.556 [2024-12-09 23:06:24.370581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:08.556 [2024-12-09 23:06:24.370601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:08.556 [2024-12-09 23:06:24.370707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:08.556 [2024-12-09 23:06:24.370879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:08.556 [2024-12-09 23:06:24.370897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:08.556 [2024-12-09 23:06:24.371033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.556 
23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.556 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.815 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.815 "name": "raid_bdev1", 00:25:08.815 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:08.815 "strip_size_kb": 0, 00:25:08.815 "state": "online", 00:25:08.815 "raid_level": "raid1", 00:25:08.815 "superblock": true, 00:25:08.815 "num_base_bdevs": 2, 00:25:08.815 "num_base_bdevs_discovered": 2, 00:25:08.815 "num_base_bdevs_operational": 2, 00:25:08.815 "base_bdevs_list": [ 00:25:08.815 { 00:25:08.815 "name": "BaseBdev1", 00:25:08.815 "uuid": "8f402a32-7778-5efe-9e62-2aa28b55c337", 00:25:08.815 "is_configured": true, 00:25:08.815 "data_offset": 256, 00:25:08.815 "data_size": 7936 00:25:08.815 }, 00:25:08.815 { 00:25:08.815 "name": "BaseBdev2", 00:25:08.815 "uuid": 
"d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:08.815 "is_configured": true, 00:25:08.815 "data_offset": 256, 00:25:08.815 "data_size": 7936 00:25:08.815 } 00:25:08.815 ] 00:25:08.815 }' 00:25:08.815 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.815 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.075 [2024-12-09 23:06:24.827961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:09.075 23:06:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:09.075 23:06:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:09.338 [2024-12-09 23:06:25.139104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:09.338 /dev/nbd0 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:09.338 23:06:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:09.338 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:09.338 1+0 records in 00:25:09.338 1+0 records out 00:25:09.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488894 s, 8.4 MB/s 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:25:09.601 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:10.171 7936+0 records in 00:25:10.171 7936+0 records out 00:25:10.171 32505856 bytes (33 MB, 31 MiB) copied, 0.712187 s, 45.6 MB/s 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:10.171 23:06:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:10.431 [2024-12-09 23:06:26.152244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.431 [2024-12-09 23:06:26.192280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:10.431 23:06:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.431 "name": "raid_bdev1", 00:25:10.431 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:10.431 "strip_size_kb": 0, 00:25:10.431 "state": "online", 00:25:10.431 "raid_level": "raid1", 00:25:10.431 "superblock": true, 00:25:10.431 "num_base_bdevs": 2, 00:25:10.431 "num_base_bdevs_discovered": 1, 00:25:10.431 "num_base_bdevs_operational": 1, 00:25:10.431 "base_bdevs_list": [ 00:25:10.431 { 00:25:10.431 "name": null, 00:25:10.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.431 "is_configured": false, 00:25:10.431 "data_offset": 0, 00:25:10.431 "data_size": 7936 00:25:10.431 }, 00:25:10.431 { 00:25:10.431 "name": "BaseBdev2", 00:25:10.431 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:10.431 "is_configured": true, 00:25:10.431 "data_offset": 256, 00:25:10.431 "data_size": 7936 00:25:10.431 } 
00:25:10.431 ] 00:25:10.431 }' 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.431 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.999 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:10.999 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.999 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.999 [2024-12-09 23:06:26.599605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.999 [2024-12-09 23:06:26.617707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:25:10.999 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.999 23:06:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:10.999 [2024-12-09 23:06:26.619894] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.936 23:06:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:11.936 "name": "raid_bdev1", 00:25:11.936 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:11.936 "strip_size_kb": 0, 00:25:11.936 "state": "online", 00:25:11.936 "raid_level": "raid1", 00:25:11.936 "superblock": true, 00:25:11.936 "num_base_bdevs": 2, 00:25:11.936 "num_base_bdevs_discovered": 2, 00:25:11.936 "num_base_bdevs_operational": 2, 00:25:11.936 "process": { 00:25:11.936 "type": "rebuild", 00:25:11.936 "target": "spare", 00:25:11.936 "progress": { 00:25:11.936 "blocks": 2560, 00:25:11.936 "percent": 32 00:25:11.936 } 00:25:11.936 }, 00:25:11.936 "base_bdevs_list": [ 00:25:11.936 { 00:25:11.936 "name": "spare", 00:25:11.936 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:11.936 "is_configured": true, 00:25:11.936 "data_offset": 256, 00:25:11.936 "data_size": 7936 00:25:11.936 }, 00:25:11.936 { 00:25:11.936 "name": "BaseBdev2", 00:25:11.936 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:11.936 "is_configured": true, 00:25:11.936 "data_offset": 256, 00:25:11.936 "data_size": 7936 00:25:11.936 } 00:25:11.936 ] 00:25:11.936 }' 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.936 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:11.936 [2024-12-09 23:06:27.760417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.195 [2024-12-09 23:06:27.826291] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:12.195 [2024-12-09 23:06:27.826379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.195 [2024-12-09 23:06:27.826398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.196 [2024-12-09 23:06:27.826413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.196 "name": "raid_bdev1", 00:25:12.196 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:12.196 "strip_size_kb": 0, 00:25:12.196 "state": "online", 00:25:12.196 "raid_level": "raid1", 00:25:12.196 "superblock": true, 00:25:12.196 "num_base_bdevs": 2, 00:25:12.196 "num_base_bdevs_discovered": 1, 00:25:12.196 "num_base_bdevs_operational": 1, 00:25:12.196 "base_bdevs_list": [ 00:25:12.196 { 00:25:12.196 "name": null, 00:25:12.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.196 "is_configured": false, 00:25:12.196 "data_offset": 0, 00:25:12.196 "data_size": 7936 00:25:12.196 }, 00:25:12.196 { 00:25:12.196 "name": "BaseBdev2", 00:25:12.196 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:12.196 "is_configured": true, 00:25:12.196 "data_offset": 
256, 00:25:12.196 "data_size": 7936 00:25:12.196 } 00:25:12.196 ] 00:25:12.196 }' 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.196 23:06:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:12.764 "name": "raid_bdev1", 00:25:12.764 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:12.764 "strip_size_kb": 0, 00:25:12.764 "state": "online", 00:25:12.764 "raid_level": "raid1", 00:25:12.764 "superblock": true, 00:25:12.764 "num_base_bdevs": 2, 00:25:12.764 "num_base_bdevs_discovered": 1, 00:25:12.764 "num_base_bdevs_operational": 1, 
00:25:12.764 "base_bdevs_list": [ 00:25:12.764 { 00:25:12.764 "name": null, 00:25:12.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.764 "is_configured": false, 00:25:12.764 "data_offset": 0, 00:25:12.764 "data_size": 7936 00:25:12.764 }, 00:25:12.764 { 00:25:12.764 "name": "BaseBdev2", 00:25:12.764 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:12.764 "is_configured": true, 00:25:12.764 "data_offset": 256, 00:25:12.764 "data_size": 7936 00:25:12.764 } 00:25:12.764 ] 00:25:12.764 }' 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.764 [2024-12-09 23:06:28.496246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:12.764 [2024-12-09 23:06:28.513723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.764 23:06:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:12.764 [2024-12-09 23:06:28.515987] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.714 23:06:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.714 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:13.974 "name": "raid_bdev1", 00:25:13.974 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:13.974 "strip_size_kb": 0, 00:25:13.974 "state": "online", 00:25:13.974 "raid_level": "raid1", 00:25:13.974 "superblock": true, 00:25:13.974 "num_base_bdevs": 2, 00:25:13.974 "num_base_bdevs_discovered": 2, 00:25:13.974 "num_base_bdevs_operational": 2, 00:25:13.974 "process": { 00:25:13.974 "type": "rebuild", 00:25:13.974 "target": "spare", 00:25:13.974 "progress": { 00:25:13.974 "blocks": 2560, 00:25:13.974 "percent": 32 00:25:13.974 } 00:25:13.974 }, 00:25:13.974 "base_bdevs_list": [ 00:25:13.974 { 00:25:13.974 "name": "spare", 00:25:13.974 "uuid": 
"d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:13.974 "is_configured": true, 00:25:13.974 "data_offset": 256, 00:25:13.974 "data_size": 7936 00:25:13.974 }, 00:25:13.974 { 00:25:13.974 "name": "BaseBdev2", 00:25:13.974 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:13.974 "is_configured": true, 00:25:13.974 "data_offset": 256, 00:25:13.974 "data_size": 7936 00:25:13.974 } 00:25:13.974 ] 00:25:13.974 }' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:13.974 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=747 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.974 
23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:13.974 "name": "raid_bdev1", 00:25:13.974 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:13.974 "strip_size_kb": 0, 00:25:13.974 "state": "online", 00:25:13.974 "raid_level": "raid1", 00:25:13.974 "superblock": true, 00:25:13.974 "num_base_bdevs": 2, 00:25:13.974 "num_base_bdevs_discovered": 2, 00:25:13.974 "num_base_bdevs_operational": 2, 00:25:13.974 "process": { 00:25:13.974 "type": "rebuild", 00:25:13.974 "target": "spare", 00:25:13.974 "progress": { 00:25:13.974 "blocks": 2816, 00:25:13.974 "percent": 35 00:25:13.974 } 00:25:13.974 }, 00:25:13.974 "base_bdevs_list": [ 00:25:13.974 { 00:25:13.974 "name": "spare", 00:25:13.974 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:13.974 "is_configured": true, 00:25:13.974 "data_offset": 256, 00:25:13.974 "data_size": 7936 00:25:13.974 
}, 00:25:13.974 { 00:25:13.974 "name": "BaseBdev2", 00:25:13.974 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:13.974 "is_configured": true, 00:25:13.974 "data_offset": 256, 00:25:13.974 "data_size": 7936 00:25:13.974 } 00:25:13.974 ] 00:25:13.974 }' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.974 23:06:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:15.353 "name": "raid_bdev1", 00:25:15.353 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:15.353 "strip_size_kb": 0, 00:25:15.353 "state": "online", 00:25:15.353 "raid_level": "raid1", 00:25:15.353 "superblock": true, 00:25:15.353 "num_base_bdevs": 2, 00:25:15.353 "num_base_bdevs_discovered": 2, 00:25:15.353 "num_base_bdevs_operational": 2, 00:25:15.353 "process": { 00:25:15.353 "type": "rebuild", 00:25:15.353 "target": "spare", 00:25:15.353 "progress": { 00:25:15.353 "blocks": 5632, 00:25:15.353 "percent": 70 00:25:15.353 } 00:25:15.353 }, 00:25:15.353 "base_bdevs_list": [ 00:25:15.353 { 00:25:15.353 "name": "spare", 00:25:15.353 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:15.353 "is_configured": true, 00:25:15.353 "data_offset": 256, 00:25:15.353 "data_size": 7936 00:25:15.353 }, 00:25:15.353 { 00:25:15.353 "name": "BaseBdev2", 00:25:15.353 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:15.353 "is_configured": true, 00:25:15.353 "data_offset": 256, 00:25:15.353 "data_size": 7936 00:25:15.353 } 00:25:15.353 ] 00:25:15.353 }' 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.353 23:06:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:25:15.922 [2024-12-09 23:06:31.632378] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:15.922 [2024-12-09 23:06:31.632495] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:15.922 [2024-12-09 23:06:31.632647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.180 23:06:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.180 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:16.180 "name": "raid_bdev1", 00:25:16.180 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:16.180 
"strip_size_kb": 0, 00:25:16.180 "state": "online", 00:25:16.180 "raid_level": "raid1", 00:25:16.180 "superblock": true, 00:25:16.180 "num_base_bdevs": 2, 00:25:16.180 "num_base_bdevs_discovered": 2, 00:25:16.180 "num_base_bdevs_operational": 2, 00:25:16.180 "base_bdevs_list": [ 00:25:16.180 { 00:25:16.180 "name": "spare", 00:25:16.180 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:16.180 "is_configured": true, 00:25:16.180 "data_offset": 256, 00:25:16.180 "data_size": 7936 00:25:16.180 }, 00:25:16.180 { 00:25:16.180 "name": "BaseBdev2", 00:25:16.180 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:16.180 "is_configured": true, 00:25:16.180 "data_offset": 256, 00:25:16.180 "data_size": 7936 00:25:16.180 } 00:25:16.180 ] 00:25:16.180 }' 00:25:16.180 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:16.439 23:06:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.439 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:16.440 "name": "raid_bdev1", 00:25:16.440 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:16.440 "strip_size_kb": 0, 00:25:16.440 "state": "online", 00:25:16.440 "raid_level": "raid1", 00:25:16.440 "superblock": true, 00:25:16.440 "num_base_bdevs": 2, 00:25:16.440 "num_base_bdevs_discovered": 2, 00:25:16.440 "num_base_bdevs_operational": 2, 00:25:16.440 "base_bdevs_list": [ 00:25:16.440 { 00:25:16.440 "name": "spare", 00:25:16.440 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:16.440 "is_configured": true, 00:25:16.440 "data_offset": 256, 00:25:16.440 "data_size": 7936 00:25:16.440 }, 00:25:16.440 { 00:25:16.440 "name": "BaseBdev2", 00:25:16.440 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:16.440 "is_configured": true, 00:25:16.440 "data_offset": 256, 00:25:16.440 "data_size": 7936 00:25:16.440 } 00:25:16.440 ] 00:25:16.440 }' 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:16.440 23:06:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.440 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.702 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.702 "name": "raid_bdev1", 00:25:16.702 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:16.702 "strip_size_kb": 0, 00:25:16.702 "state": "online", 00:25:16.702 "raid_level": "raid1", 00:25:16.702 "superblock": true, 00:25:16.702 "num_base_bdevs": 2, 00:25:16.702 "num_base_bdevs_discovered": 2, 00:25:16.702 "num_base_bdevs_operational": 2, 00:25:16.702 "base_bdevs_list": [ 00:25:16.702 { 00:25:16.702 "name": "spare", 00:25:16.702 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:16.702 "is_configured": true, 00:25:16.702 "data_offset": 256, 00:25:16.702 "data_size": 7936 00:25:16.702 }, 00:25:16.702 { 00:25:16.702 "name": "BaseBdev2", 00:25:16.702 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:16.702 "is_configured": true, 00:25:16.702 "data_offset": 256, 00:25:16.702 "data_size": 7936 00:25:16.702 } 00:25:16.702 ] 00:25:16.702 }' 00:25:16.702 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.702 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.963 [2024-12-09 23:06:32.719188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.963 [2024-12-09 23:06:32.719230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.963 [2024-12-09 23:06:32.719329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.963 [2024-12-09 23:06:32.719409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:25:16.963 [2024-12-09 23:06:32.719429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:16.963 23:06:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:16.963 23:06:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:17.221 /dev/nbd0 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.221 1+0 records in 00:25:17.221 1+0 records out 00:25:17.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381066 
s, 10.7 MB/s 00:25:17.221 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:17.480 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:17.739 /dev/nbd1 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.739 1+0 records in 00:25:17.739 1+0 records out 00:25:17.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397261 s, 10.3 MB/s 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:17.739 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:17.997 23:06:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:18.255 
23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:18.255 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:18.256 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:18.256 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:18.256 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.256 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.256 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.515 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:18.515 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.515 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.515 [2024-12-09 23:06:34.117495] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:18.515 [2024-12-09 23:06:34.117598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.515 [2024-12-09 23:06:34.117632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:25:18.515 [2024-12-09 23:06:34.117643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.515 [2024-12-09 23:06:34.120133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.515 [2024-12-09 23:06:34.120181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:18.515 [2024-12-09 23:06:34.120267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:18.516 [2024-12-09 23:06:34.120348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:18.516 [2024-12-09 23:06:34.120543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.516 spare 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.516 [2024-12-09 23:06:34.220512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:18.516 [2024-12-09 23:06:34.220602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:18.516 [2024-12-09 23:06:34.220749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:25:18.516 [2024-12-09 23:06:34.220972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:18.516 [2024-12-09 23:06:34.220995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:18.516 [2024-12-09 23:06:34.221164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.516 23:06:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.516 "name": "raid_bdev1", 00:25:18.516 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:18.516 "strip_size_kb": 0, 00:25:18.516 "state": "online", 00:25:18.516 "raid_level": "raid1", 00:25:18.516 "superblock": true, 00:25:18.516 "num_base_bdevs": 2, 00:25:18.516 "num_base_bdevs_discovered": 2, 00:25:18.516 "num_base_bdevs_operational": 2, 00:25:18.516 "base_bdevs_list": [ 00:25:18.516 { 00:25:18.516 "name": "spare", 00:25:18.516 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:18.516 "is_configured": true, 00:25:18.516 "data_offset": 256, 00:25:18.516 "data_size": 7936 00:25:18.516 }, 00:25:18.516 { 00:25:18.516 "name": "BaseBdev2", 00:25:18.516 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:18.516 "is_configured": true, 00:25:18.516 "data_offset": 256, 00:25:18.516 "data_size": 7936 00:25:18.516 } 00:25:18.516 ] 00:25:18.516 }' 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.516 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:19.085 "name": "raid_bdev1", 00:25:19.085 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:19.085 "strip_size_kb": 0, 00:25:19.085 "state": "online", 00:25:19.085 "raid_level": "raid1", 00:25:19.085 "superblock": true, 00:25:19.085 "num_base_bdevs": 2, 00:25:19.085 "num_base_bdevs_discovered": 2, 00:25:19.085 "num_base_bdevs_operational": 2, 00:25:19.085 "base_bdevs_list": [ 00:25:19.085 { 00:25:19.085 "name": "spare", 00:25:19.085 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:19.085 "is_configured": true, 00:25:19.085 "data_offset": 256, 00:25:19.085 "data_size": 7936 00:25:19.085 }, 00:25:19.085 { 00:25:19.085 "name": "BaseBdev2", 00:25:19.085 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:19.085 "is_configured": true, 00:25:19.085 "data_offset": 256, 00:25:19.085 "data_size": 7936 00:25:19.085 } 00:25:19.085 ] 00:25:19.085 }' 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.085 [2024-12-09 23:06:34.896751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:19.085 23:06:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.085 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.345 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.345 "name": "raid_bdev1", 00:25:19.345 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:19.345 "strip_size_kb": 0, 00:25:19.345 "state": "online", 00:25:19.345 "raid_level": "raid1", 00:25:19.345 "superblock": true, 00:25:19.345 "num_base_bdevs": 2, 00:25:19.345 "num_base_bdevs_discovered": 1, 00:25:19.345 "num_base_bdevs_operational": 1, 00:25:19.345 "base_bdevs_list": [ 00:25:19.345 { 00:25:19.345 "name": null, 00:25:19.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.345 "is_configured": false, 00:25:19.345 "data_offset": 0, 00:25:19.345 "data_size": 7936 00:25:19.345 }, 00:25:19.345 { 00:25:19.345 "name": "BaseBdev2", 00:25:19.345 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:19.345 "is_configured": true, 00:25:19.345 "data_offset": 256, 00:25:19.345 "data_size": 7936 00:25:19.345 } 
00:25:19.345 ] 00:25:19.345 }' 00:25:19.345 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.345 23:06:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.609 23:06:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:19.609 23:06:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.609 23:06:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.609 [2024-12-09 23:06:35.364765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.609 [2024-12-09 23:06:35.365002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:19.609 [2024-12-09 23:06:35.365033] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:19.609 [2024-12-09 23:06:35.365077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.609 [2024-12-09 23:06:35.382333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:25:19.609 23:06:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.609 23:06:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:19.609 [2024-12-09 23:06:35.384408] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.547 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:20.807 "name": "raid_bdev1", 00:25:20.807 
"uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:20.807 "strip_size_kb": 0, 00:25:20.807 "state": "online", 00:25:20.807 "raid_level": "raid1", 00:25:20.807 "superblock": true, 00:25:20.807 "num_base_bdevs": 2, 00:25:20.807 "num_base_bdevs_discovered": 2, 00:25:20.807 "num_base_bdevs_operational": 2, 00:25:20.807 "process": { 00:25:20.807 "type": "rebuild", 00:25:20.807 "target": "spare", 00:25:20.807 "progress": { 00:25:20.807 "blocks": 2560, 00:25:20.807 "percent": 32 00:25:20.807 } 00:25:20.807 }, 00:25:20.807 "base_bdevs_list": [ 00:25:20.807 { 00:25:20.807 "name": "spare", 00:25:20.807 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:20.807 "is_configured": true, 00:25:20.807 "data_offset": 256, 00:25:20.807 "data_size": 7936 00:25:20.807 }, 00:25:20.807 { 00:25:20.807 "name": "BaseBdev2", 00:25:20.807 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:20.807 "is_configured": true, 00:25:20.807 "data_offset": 256, 00:25:20.807 "data_size": 7936 00:25:20.807 } 00:25:20.807 ] 00:25:20.807 }' 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.807 [2024-12-09 23:06:36.536825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:20.807 
[2024-12-09 23:06:36.590871] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:20.807 [2024-12-09 23:06:36.590968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.807 [2024-12-09 23:06:36.590987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:20.807 [2024-12-09 23:06:36.591012] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.807 23:06:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:20.807 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.066 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.066 "name": "raid_bdev1", 00:25:21.066 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:21.066 "strip_size_kb": 0, 00:25:21.066 "state": "online", 00:25:21.066 "raid_level": "raid1", 00:25:21.066 "superblock": true, 00:25:21.066 "num_base_bdevs": 2, 00:25:21.066 "num_base_bdevs_discovered": 1, 00:25:21.067 "num_base_bdevs_operational": 1, 00:25:21.067 "base_bdevs_list": [ 00:25:21.067 { 00:25:21.067 "name": null, 00:25:21.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.067 "is_configured": false, 00:25:21.067 "data_offset": 0, 00:25:21.067 "data_size": 7936 00:25:21.067 }, 00:25:21.067 { 00:25:21.067 "name": "BaseBdev2", 00:25:21.067 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:21.067 "is_configured": true, 00:25:21.067 "data_offset": 256, 00:25:21.067 "data_size": 7936 00:25:21.067 } 00:25:21.067 ] 00:25:21.067 }' 00:25:21.067 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.067 23:06:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:21.326 23:06:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:21.326 23:06:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.327 23:06:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.327 [2024-12-09 23:06:37.098035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:21.327 [2024-12-09 23:06:37.098115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.327 [2024-12-09 23:06:37.098143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:21.327 [2024-12-09 23:06:37.098173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.327 [2024-12-09 23:06:37.098497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.327 [2024-12-09 23:06:37.098529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:21.327 [2024-12-09 23:06:37.098606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:21.327 [2024-12-09 23:06:37.098628] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:21.327 [2024-12-09 23:06:37.098641] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:21.327 [2024-12-09 23:06:37.098669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:21.327 [2024-12-09 23:06:37.115404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:25:21.327 spare 00:25:21.327 23:06:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.327 23:06:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:21.327 [2024-12-09 23:06:37.117635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.705 "name": 
"raid_bdev1", 00:25:22.705 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:22.705 "strip_size_kb": 0, 00:25:22.705 "state": "online", 00:25:22.705 "raid_level": "raid1", 00:25:22.705 "superblock": true, 00:25:22.705 "num_base_bdevs": 2, 00:25:22.705 "num_base_bdevs_discovered": 2, 00:25:22.705 "num_base_bdevs_operational": 2, 00:25:22.705 "process": { 00:25:22.705 "type": "rebuild", 00:25:22.705 "target": "spare", 00:25:22.705 "progress": { 00:25:22.705 "blocks": 2560, 00:25:22.705 "percent": 32 00:25:22.705 } 00:25:22.705 }, 00:25:22.705 "base_bdevs_list": [ 00:25:22.705 { 00:25:22.705 "name": "spare", 00:25:22.705 "uuid": "d7b94ea3-5fc7-5a4b-ba12-52961656579a", 00:25:22.705 "is_configured": true, 00:25:22.705 "data_offset": 256, 00:25:22.705 "data_size": 7936 00:25:22.705 }, 00:25:22.705 { 00:25:22.705 "name": "BaseBdev2", 00:25:22.705 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:22.705 "is_configured": true, 00:25:22.705 "data_offset": 256, 00:25:22.705 "data_size": 7936 00:25:22.705 } 00:25:22.705 ] 00:25:22.705 }' 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.705 [2024-12-09 23:06:38.273132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:22.705 [2024-12-09 23:06:38.324196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:22.705 [2024-12-09 23:06:38.324279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.705 [2024-12-09 23:06:38.324301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:22.705 [2024-12-09 23:06:38.324310] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.705 "name": "raid_bdev1", 00:25:22.705 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:22.705 "strip_size_kb": 0, 00:25:22.705 "state": "online", 00:25:22.705 "raid_level": "raid1", 00:25:22.705 "superblock": true, 00:25:22.705 "num_base_bdevs": 2, 00:25:22.705 "num_base_bdevs_discovered": 1, 00:25:22.705 "num_base_bdevs_operational": 1, 00:25:22.705 "base_bdevs_list": [ 00:25:22.705 { 00:25:22.705 "name": null, 00:25:22.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.705 "is_configured": false, 00:25:22.705 "data_offset": 0, 00:25:22.705 "data_size": 7936 00:25:22.705 }, 00:25:22.705 { 00:25:22.705 "name": "BaseBdev2", 00:25:22.705 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:22.705 "is_configured": true, 00:25:22.705 "data_offset": 256, 00:25:22.705 "data_size": 7936 00:25:22.705 } 00:25:22.705 ] 00:25:22.705 }' 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.705 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.275 23:06:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.275 "name": "raid_bdev1", 00:25:23.275 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:23.275 "strip_size_kb": 0, 00:25:23.275 "state": "online", 00:25:23.275 "raid_level": "raid1", 00:25:23.275 "superblock": true, 00:25:23.275 "num_base_bdevs": 2, 00:25:23.275 "num_base_bdevs_discovered": 1, 00:25:23.275 "num_base_bdevs_operational": 1, 00:25:23.275 "base_bdevs_list": [ 00:25:23.275 { 00:25:23.275 "name": null, 00:25:23.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.275 "is_configured": false, 00:25:23.275 "data_offset": 0, 00:25:23.275 "data_size": 7936 00:25:23.275 }, 00:25:23.275 { 00:25:23.275 "name": "BaseBdev2", 00:25:23.275 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:23.275 "is_configured": true, 00:25:23.275 "data_offset": 256, 00:25:23.275 "data_size": 7936 00:25:23.275 } 00:25:23.275 ] 00:25:23.275 }' 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.275 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.275 [2024-12-09 23:06:38.994420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.275 [2024-12-09 23:06:38.994519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.275 [2024-12-09 23:06:38.994550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:23.275 [2024-12-09 23:06:38.994560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.275 [2024-12-09 23:06:38.994856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.275 [2024-12-09 23:06:38.994877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:25:23.276 [2024-12-09 23:06:38.994943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:23.276 [2024-12-09 23:06:38.994957] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:23.276 [2024-12-09 23:06:38.994970] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:23.276 [2024-12-09 23:06:38.994981] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:23.276 BaseBdev1 00:25:23.276 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.276 23:06:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.212 "name": "raid_bdev1", 00:25:24.212 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:24.212 "strip_size_kb": 0, 00:25:24.212 "state": "online", 00:25:24.212 "raid_level": "raid1", 00:25:24.212 "superblock": true, 00:25:24.212 "num_base_bdevs": 2, 00:25:24.212 "num_base_bdevs_discovered": 1, 00:25:24.212 "num_base_bdevs_operational": 1, 00:25:24.212 "base_bdevs_list": [ 00:25:24.212 { 00:25:24.212 "name": null, 00:25:24.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.212 "is_configured": false, 00:25:24.212 "data_offset": 0, 00:25:24.212 "data_size": 7936 00:25:24.212 }, 00:25:24.212 { 00:25:24.212 "name": "BaseBdev2", 00:25:24.212 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:24.212 "is_configured": true, 00:25:24.212 "data_offset": 256, 00:25:24.212 "data_size": 7936 00:25:24.212 } 00:25:24.212 ] 00:25:24.212 }' 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.212 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.778 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:24.778 "name": "raid_bdev1", 00:25:24.778 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:24.778 "strip_size_kb": 0, 00:25:24.778 "state": "online", 00:25:24.778 "raid_level": "raid1", 00:25:24.778 "superblock": true, 00:25:24.778 "num_base_bdevs": 2, 00:25:24.778 "num_base_bdevs_discovered": 1, 00:25:24.778 "num_base_bdevs_operational": 1, 00:25:24.778 "base_bdevs_list": [ 00:25:24.778 { 00:25:24.778 "name": null, 00:25:24.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.778 "is_configured": false, 00:25:24.778 "data_offset": 0, 00:25:24.779 "data_size": 7936 00:25:24.779 }, 00:25:24.779 { 00:25:24.779 "name": "BaseBdev2", 00:25:24.779 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:24.779 "is_configured": 
true, 00:25:24.779 "data_offset": 256, 00:25:24.779 "data_size": 7936 00:25:24.779 } 00:25:24.779 ] 00:25:24.779 }' 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.779 [2024-12-09 23:06:40.620006] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:24.779 [2024-12-09 23:06:40.620205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:24.779 [2024-12-09 23:06:40.620233] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:24.779 request: 00:25:24.779 { 00:25:24.779 "base_bdev": "BaseBdev1", 00:25:24.779 "raid_bdev": "raid_bdev1", 00:25:24.779 "method": "bdev_raid_add_base_bdev", 00:25:24.779 "req_id": 1 00:25:24.779 } 00:25:24.779 Got JSON-RPC error response 00:25:24.779 response: 00:25:24.779 { 00:25:24.779 "code": -22, 00:25:24.779 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:24.779 } 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:24.779 23:06:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.154 "name": "raid_bdev1", 00:25:26.154 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:26.154 "strip_size_kb": 0, 00:25:26.154 "state": "online", 00:25:26.154 "raid_level": "raid1", 00:25:26.154 "superblock": true, 00:25:26.154 "num_base_bdevs": 2, 00:25:26.154 "num_base_bdevs_discovered": 1, 00:25:26.154 "num_base_bdevs_operational": 1, 00:25:26.154 "base_bdevs_list": [ 00:25:26.154 { 00:25:26.154 "name": null, 00:25:26.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.154 "is_configured": false, 00:25:26.154 
"data_offset": 0, 00:25:26.154 "data_size": 7936 00:25:26.154 }, 00:25:26.154 { 00:25:26.154 "name": "BaseBdev2", 00:25:26.154 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:26.154 "is_configured": true, 00:25:26.154 "data_offset": 256, 00:25:26.154 "data_size": 7936 00:25:26.154 } 00:25:26.154 ] 00:25:26.154 }' 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.154 23:06:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.414 "name": "raid_bdev1", 00:25:26.414 "uuid": "a02994a6-65af-4f5b-98ad-42dacb779b63", 00:25:26.414 
"strip_size_kb": 0, 00:25:26.414 "state": "online", 00:25:26.414 "raid_level": "raid1", 00:25:26.414 "superblock": true, 00:25:26.414 "num_base_bdevs": 2, 00:25:26.414 "num_base_bdevs_discovered": 1, 00:25:26.414 "num_base_bdevs_operational": 1, 00:25:26.414 "base_bdevs_list": [ 00:25:26.414 { 00:25:26.414 "name": null, 00:25:26.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.414 "is_configured": false, 00:25:26.414 "data_offset": 0, 00:25:26.414 "data_size": 7936 00:25:26.414 }, 00:25:26.414 { 00:25:26.414 "name": "BaseBdev2", 00:25:26.414 "uuid": "d600263d-8df8-542c-83d5-d6c368bf4a2a", 00:25:26.414 "is_configured": true, 00:25:26.414 "data_offset": 256, 00:25:26.414 "data_size": 7936 00:25:26.414 } 00:25:26.414 ] 00:25:26.414 }' 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88503 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88503 ']' 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88503 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.414 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88503 00:25:26.672 killing process with 
pid 88503 00:25:26.672 Received shutdown signal, test time was about 60.000000 seconds 00:25:26.672 00:25:26.672 Latency(us) 00:25:26.672 [2024-12-09T23:06:42.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.672 [2024-12-09T23:06:42.528Z] =================================================================================================================== 00:25:26.672 [2024-12-09T23:06:42.528Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:26.672 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.673 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.673 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88503' 00:25:26.673 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88503 00:25:26.673 [2024-12-09 23:06:42.272330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:26.673 [2024-12-09 23:06:42.272504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.673 23:06:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88503 00:25:26.673 [2024-12-09 23:06:42.272569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.673 [2024-12-09 23:06:42.272599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:26.929 [2024-12-09 23:06:42.628172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:28.311 23:06:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:25:28.311 ************************************ 00:25:28.311 END TEST raid_rebuild_test_sb_md_separate 00:25:28.311 
************************************ 00:25:28.311 00:25:28.311 real 0m20.734s 00:25:28.311 user 0m27.194s 00:25:28.311 sys 0m2.819s 00:25:28.311 23:06:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.311 23:06:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:28.311 23:06:43 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:25:28.311 23:06:43 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:25:28.311 23:06:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:28.311 23:06:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.311 23:06:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:28.311 ************************************ 00:25:28.311 START TEST raid_state_function_test_sb_md_interleaved 00:25:28.311 ************************************ 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:28.311 23:06:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89197 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:28.311 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89197' 00:25:28.311 Process raid pid: 89197 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89197 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89197 ']' 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.312 23:06:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.312 [2024-12-09 23:06:44.048104] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:25:28.312 [2024-12-09 23:06:44.048313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.570 [2024-12-09 23:06:44.211576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.570 [2024-12-09 23:06:44.344373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.829 [2024-12-09 23:06:44.576341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:28.829 [2024-12-09 23:06:44.576388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:29.089 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.089 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:29.089 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:29.089 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.089 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.349 [2024-12-09 23:06:44.948594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:29.349 [2024-12-09 23:06:44.948709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:29.349 [2024-12-09 23:06:44.948727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.349 [2024-12-09 23:06:44.948739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.349 23:06:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.349 23:06:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.349 23:06:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.349 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.349 "name": "Existed_Raid", 00:25:29.349 "uuid": "386558ca-31a8-4055-b384-66bd45ebff41", 00:25:29.349 "strip_size_kb": 0, 00:25:29.349 "state": "configuring", 00:25:29.349 "raid_level": "raid1", 00:25:29.349 "superblock": true, 00:25:29.349 "num_base_bdevs": 2, 00:25:29.349 "num_base_bdevs_discovered": 0, 00:25:29.349 "num_base_bdevs_operational": 2, 00:25:29.349 "base_bdevs_list": [ 00:25:29.349 { 00:25:29.349 "name": "BaseBdev1", 00:25:29.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.349 "is_configured": false, 00:25:29.349 "data_offset": 0, 00:25:29.349 "data_size": 0 00:25:29.349 }, 00:25:29.349 { 00:25:29.349 "name": "BaseBdev2", 00:25:29.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.350 "is_configured": false, 00:25:29.350 "data_offset": 0, 00:25:29.350 "data_size": 0 00:25:29.350 } 00:25:29.350 ] 00:25:29.350 }' 00:25:29.350 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.350 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.609 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:29.609 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.609 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.609 [2024-12-09 23:06:45.435673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.609 [2024-12-09 23:06:45.435777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:25:29.609 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 [2024-12-09 23:06:45.447656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:29.610 [2024-12-09 23:06:45.447723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:29.610 [2024-12-09 23:06:45.447734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.610 [2024-12-09 23:06:45.447747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.610 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.877 [2024-12-09 23:06:45.501230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.877 BaseBdev1 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.877 [ 00:25:29.877 { 00:25:29.877 "name": "BaseBdev1", 00:25:29.877 "aliases": [ 00:25:29.877 "50983dc8-3c7b-47dd-9838-9314d87ef62d" 00:25:29.877 ], 00:25:29.877 "product_name": "Malloc disk", 00:25:29.877 "block_size": 4128, 00:25:29.877 "num_blocks": 8192, 00:25:29.877 "uuid": "50983dc8-3c7b-47dd-9838-9314d87ef62d", 00:25:29.877 "md_size": 32, 00:25:29.877 
"md_interleave": true, 00:25:29.877 "dif_type": 0, 00:25:29.877 "assigned_rate_limits": { 00:25:29.877 "rw_ios_per_sec": 0, 00:25:29.877 "rw_mbytes_per_sec": 0, 00:25:29.877 "r_mbytes_per_sec": 0, 00:25:29.877 "w_mbytes_per_sec": 0 00:25:29.877 }, 00:25:29.877 "claimed": true, 00:25:29.877 "claim_type": "exclusive_write", 00:25:29.877 "zoned": false, 00:25:29.877 "supported_io_types": { 00:25:29.877 "read": true, 00:25:29.877 "write": true, 00:25:29.877 "unmap": true, 00:25:29.877 "flush": true, 00:25:29.877 "reset": true, 00:25:29.877 "nvme_admin": false, 00:25:29.877 "nvme_io": false, 00:25:29.877 "nvme_io_md": false, 00:25:29.877 "write_zeroes": true, 00:25:29.877 "zcopy": true, 00:25:29.877 "get_zone_info": false, 00:25:29.877 "zone_management": false, 00:25:29.877 "zone_append": false, 00:25:29.877 "compare": false, 00:25:29.877 "compare_and_write": false, 00:25:29.877 "abort": true, 00:25:29.877 "seek_hole": false, 00:25:29.877 "seek_data": false, 00:25:29.877 "copy": true, 00:25:29.877 "nvme_iov_md": false 00:25:29.877 }, 00:25:29.877 "memory_domains": [ 00:25:29.877 { 00:25:29.877 "dma_device_id": "system", 00:25:29.877 "dma_device_type": 1 00:25:29.877 }, 00:25:29.877 { 00:25:29.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.877 "dma_device_type": 2 00:25:29.877 } 00:25:29.877 ], 00:25:29.877 "driver_specific": {} 00:25:29.877 } 00:25:29.877 ] 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.877 23:06:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.877 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.878 "name": "Existed_Raid", 00:25:29.878 "uuid": "5e50c106-3a16-4e6f-88f2-3c805fea9b80", 00:25:29.878 "strip_size_kb": 0, 00:25:29.878 "state": "configuring", 00:25:29.878 "raid_level": "raid1", 
00:25:29.878 "superblock": true, 00:25:29.878 "num_base_bdevs": 2, 00:25:29.878 "num_base_bdevs_discovered": 1, 00:25:29.878 "num_base_bdevs_operational": 2, 00:25:29.878 "base_bdevs_list": [ 00:25:29.878 { 00:25:29.878 "name": "BaseBdev1", 00:25:29.878 "uuid": "50983dc8-3c7b-47dd-9838-9314d87ef62d", 00:25:29.878 "is_configured": true, 00:25:29.878 "data_offset": 256, 00:25:29.878 "data_size": 7936 00:25:29.878 }, 00:25:29.878 { 00:25:29.878 "name": "BaseBdev2", 00:25:29.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.878 "is_configured": false, 00:25:29.878 "data_offset": 0, 00:25:29.878 "data_size": 0 00:25:29.878 } 00:25:29.878 ] 00:25:29.878 }' 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.878 23:06:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.449 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:30.449 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.449 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.450 [2024-12-09 23:06:46.052751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:30.450 [2024-12-09 23:06:46.052819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.450 [2024-12-09 23:06:46.064802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:30.450 [2024-12-09 23:06:46.066950] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:30.450 [2024-12-09 23:06:46.067007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.450 
23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.450 "name": "Existed_Raid", 00:25:30.450 "uuid": "9bbdbb3e-25b4-41b7-a209-0cb3cbea6091", 00:25:30.450 "strip_size_kb": 0, 00:25:30.450 "state": "configuring", 00:25:30.450 "raid_level": "raid1", 00:25:30.450 "superblock": true, 00:25:30.450 "num_base_bdevs": 2, 00:25:30.450 "num_base_bdevs_discovered": 1, 00:25:30.450 "num_base_bdevs_operational": 2, 00:25:30.450 "base_bdevs_list": [ 00:25:30.450 { 00:25:30.450 "name": "BaseBdev1", 00:25:30.450 "uuid": "50983dc8-3c7b-47dd-9838-9314d87ef62d", 00:25:30.450 "is_configured": true, 00:25:30.450 "data_offset": 256, 00:25:30.450 "data_size": 7936 00:25:30.450 }, 00:25:30.450 { 00:25:30.450 "name": "BaseBdev2", 00:25:30.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.450 "is_configured": false, 00:25:30.450 "data_offset": 0, 00:25:30.450 "data_size": 0 00:25:30.450 } 00:25:30.450 ] 00:25:30.450 }' 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:30.450 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.708 [2024-12-09 23:06:46.551148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.708 [2024-12-09 23:06:46.551436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:30.708 [2024-12-09 23:06:46.551453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:30.708 [2024-12-09 23:06:46.551562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:30.708 [2024-12-09 23:06:46.551648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:30.708 [2024-12-09 23:06:46.551664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:30.708 [2024-12-09 23:06:46.551734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.708 BaseBdev2 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.708 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:30.968 [ 00:25:30.968 { 00:25:30.968 "name": "BaseBdev2", 00:25:30.968 "aliases": [ 00:25:30.968 "51725d2f-b671-4a9c-a149-b358afc08516" 00:25:30.968 ], 00:25:30.968 "product_name": "Malloc disk", 00:25:30.968 "block_size": 4128, 00:25:30.968 "num_blocks": 8192, 00:25:30.968 "uuid": "51725d2f-b671-4a9c-a149-b358afc08516", 00:25:30.968 "md_size": 32, 00:25:30.968 "md_interleave": true, 00:25:30.968 "dif_type": 0, 00:25:30.968 "assigned_rate_limits": { 00:25:30.968 "rw_ios_per_sec": 0, 00:25:30.968 "rw_mbytes_per_sec": 0, 00:25:30.968 "r_mbytes_per_sec": 0, 00:25:30.968 "w_mbytes_per_sec": 0 00:25:30.968 }, 00:25:30.968 "claimed": true, 00:25:30.968 "claim_type": "exclusive_write", 
00:25:30.968 "zoned": false,
00:25:30.968 "supported_io_types": {
00:25:30.968 "read": true,
00:25:30.968 "write": true,
00:25:30.968 "unmap": true,
00:25:30.968 "flush": true,
00:25:30.968 "reset": true,
00:25:30.968 "nvme_admin": false,
00:25:30.968 "nvme_io": false,
00:25:30.968 "nvme_io_md": false,
00:25:30.968 "write_zeroes": true,
00:25:30.968 "zcopy": true,
00:25:30.968 "get_zone_info": false,
00:25:30.968 "zone_management": false,
00:25:30.968 "zone_append": false,
00:25:30.968 "compare": false,
00:25:30.968 "compare_and_write": false,
00:25:30.968 "abort": true,
00:25:30.968 "seek_hole": false,
00:25:30.968 "seek_data": false,
00:25:30.968 "copy": true,
00:25:30.968 "nvme_iov_md": false
00:25:30.968 },
00:25:30.968 "memory_domains": [
00:25:30.968 {
00:25:30.968 "dma_device_id": "system",
00:25:30.968 "dma_device_type": 1
00:25:30.968 },
00:25:30.968 {
00:25:30.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:30.968 "dma_device_type": 2
00:25:30.968 }
00:25:30.968 ],
00:25:30.968 "driver_specific": {}
00:25:30.968 }
00:25:30.968 ]
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:30.968 "name": "Existed_Raid",
00:25:30.968 "uuid": "9bbdbb3e-25b4-41b7-a209-0cb3cbea6091",
00:25:30.968 "strip_size_kb": 0,
00:25:30.968 "state": "online",
00:25:30.968 "raid_level": "raid1",
00:25:30.968 "superblock": true,
00:25:30.968 "num_base_bdevs": 2,
00:25:30.968 "num_base_bdevs_discovered": 2,
00:25:30.968 "num_base_bdevs_operational": 2,
00:25:30.968 "base_bdevs_list": [
00:25:30.968 {
00:25:30.968 "name": "BaseBdev1",
00:25:30.968 "uuid": "50983dc8-3c7b-47dd-9838-9314d87ef62d",
00:25:30.968 "is_configured": true,
00:25:30.968 "data_offset": 256,
00:25:30.968 "data_size": 7936
00:25:30.968 },
00:25:30.968 {
00:25:30.968 "name": "BaseBdev2",
00:25:30.968 "uuid": "51725d2f-b671-4a9c-a149-b358afc08516",
00:25:30.968 "is_configured": true,
00:25:30.968 "data_offset": 256,
00:25:30.968 "data_size": 7936
00:25:30.968 }
00:25:30.968 ]
00:25:30.968 }'
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:30.968 23:06:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:31.538 [2024-12-09 23:06:47.102751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:31.538 "name": "Existed_Raid",
00:25:31.538 "aliases": [
00:25:31.538 "9bbdbb3e-25b4-41b7-a209-0cb3cbea6091"
00:25:31.538 ],
00:25:31.538 "product_name": "Raid Volume",
00:25:31.538 "block_size": 4128,
00:25:31.538 "num_blocks": 7936,
00:25:31.538 "uuid": "9bbdbb3e-25b4-41b7-a209-0cb3cbea6091",
00:25:31.538 "md_size": 32,
00:25:31.538 "md_interleave": true,
00:25:31.538 "dif_type": 0,
00:25:31.538 "assigned_rate_limits": {
00:25:31.538 "rw_ios_per_sec": 0,
00:25:31.538 "rw_mbytes_per_sec": 0,
00:25:31.538 "r_mbytes_per_sec": 0,
00:25:31.538 "w_mbytes_per_sec": 0
00:25:31.538 },
00:25:31.538 "claimed": false,
00:25:31.538 "zoned": false,
00:25:31.538 "supported_io_types": {
00:25:31.538 "read": true,
00:25:31.538 "write": true,
00:25:31.538 "unmap": false,
00:25:31.538 "flush": false,
00:25:31.538 "reset": true,
00:25:31.538 "nvme_admin": false,
00:25:31.538 "nvme_io": false,
00:25:31.538 "nvme_io_md": false,
00:25:31.538 "write_zeroes": true,
00:25:31.538 "zcopy": false,
00:25:31.538 "get_zone_info": false,
00:25:31.538 "zone_management": false,
00:25:31.538 "zone_append": false,
00:25:31.538 "compare": false,
00:25:31.538 "compare_and_write": false,
00:25:31.538 "abort": false,
00:25:31.538 "seek_hole": false,
00:25:31.538 "seek_data": false,
00:25:31.538 "copy": false,
00:25:31.538 "nvme_iov_md": false
00:25:31.538 },
00:25:31.538 "memory_domains": [
00:25:31.538 {
00:25:31.538 "dma_device_id": "system",
00:25:31.538 "dma_device_type": 1
00:25:31.538 },
00:25:31.538 {
00:25:31.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:31.538 "dma_device_type": 2
00:25:31.538 },
00:25:31.538 {
00:25:31.538 "dma_device_id": "system",
00:25:31.538 "dma_device_type": 1
00:25:31.538 },
00:25:31.538 {
00:25:31.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:31.538 "dma_device_type": 2
00:25:31.538 }
00:25:31.538 ],
00:25:31.538 "driver_specific": {
00:25:31.538 "raid": {
00:25:31.538 "uuid": "9bbdbb3e-25b4-41b7-a209-0cb3cbea6091",
00:25:31.538 "strip_size_kb": 0,
00:25:31.538 "state": "online",
00:25:31.538 "raid_level": "raid1",
00:25:31.538 "superblock": true,
00:25:31.538 "num_base_bdevs": 2,
00:25:31.538 "num_base_bdevs_discovered": 2,
00:25:31.538 "num_base_bdevs_operational": 2,
00:25:31.538 "base_bdevs_list": [
00:25:31.538 {
00:25:31.538 "name": "BaseBdev1",
00:25:31.538 "uuid": "50983dc8-3c7b-47dd-9838-9314d87ef62d",
00:25:31.538 "is_configured": true,
00:25:31.538 "data_offset": 256,
00:25:31.538 "data_size": 7936
00:25:31.538 },
00:25:31.538 {
00:25:31.538 "name": "BaseBdev2",
00:25:31.538 "uuid": "51725d2f-b671-4a9c-a149-b358afc08516",
00:25:31.538 "is_configured": true,
00:25:31.538 "data_offset": 256,
00:25:31.538 "data_size": 7936
00:25:31.538 }
00:25:31.538 ]
00:25:31.538 }
00:25:31.538 }
00:25:31.538 }'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:25:31.538 BaseBdev2'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.538 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:31.539 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.539 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:25:31.539 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:25:31.539 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:25:31.539 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.539 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:31.539 [2024-12-09 23:06:47.334064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:31.799 "name": "Existed_Raid",
00:25:31.799 "uuid": "9bbdbb3e-25b4-41b7-a209-0cb3cbea6091",
00:25:31.799 "strip_size_kb": 0,
00:25:31.799 "state": "online",
00:25:31.799 "raid_level": "raid1",
00:25:31.799 "superblock": true,
00:25:31.799 "num_base_bdevs": 2,
00:25:31.799 "num_base_bdevs_discovered": 1,
00:25:31.799 "num_base_bdevs_operational": 1,
00:25:31.799 "base_bdevs_list": [
00:25:31.799 {
00:25:31.799 "name": null,
00:25:31.799 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:31.799 "is_configured": false,
00:25:31.799 "data_offset": 0,
00:25:31.799 "data_size": 7936
00:25:31.799 },
00:25:31.799 {
00:25:31.799 "name": "BaseBdev2",
00:25:31.799 "uuid": "51725d2f-b671-4a9c-a149-b358afc08516",
00:25:31.799 "is_configured": true,
00:25:31.799 "data_offset": 256,
00:25:31.799 "data_size": 7936
00:25:31.799 }
00:25:31.799 ]
00:25:31.799 }'
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:31.799 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:32.057 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:25:32.057 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:25:32.057 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:32.057 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:32.057 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:32.057 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:25:32.316 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:32.316 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:25:32.316 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:25:32.316 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:25:32.316 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:32.316 23:06:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:32.316 [2024-12-09 23:06:47.964978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:25:32.316 [2024-12-09 23:06:47.965116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:32.316 [2024-12-09 23:06:48.083326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:32.316 [2024-12-09 23:06:48.083394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:32.316 [2024-12-09 23:06:48.083408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89197
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89197 ']'
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89197
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:32.316 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89197
00:25:32.576 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:32.576 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:32.576 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89197' killing process with pid 89197
00:25:32.576 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89197
00:25:32.576 [2024-12-09 23:06:48.173850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:25:32.576 23:06:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89197
00:25:32.576 [2024-12-09 23:06:48.194189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:33.956 ************************************
00:25:33.956 END TEST raid_state_function_test_sb_md_interleaved
00:25:33.956 ************************************
00:25:33.956 23:06:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0
00:25:33.956
00:25:33.956 real 0m5.600s
00:25:33.956 user 0m8.030s
00:25:33.956 sys 0m0.920s
00:25:33.956 23:06:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:33.956 23:06:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:33.956 23:06:49 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2
00:25:33.956 23:06:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:25:33.956 23:06:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:33.956 23:06:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:25:33.956 ************************************
00:25:33.956 START TEST raid_superblock_test_md_interleaved
00:25:33.956 ************************************
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89456
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89456
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89456 ']'
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:33.956 23:06:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:33.956 [2024-12-09 23:06:49.703442] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... [2024-12-09 23:06:49.703626] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89456 ]
00:25:34.214 [2024-12-09 23:06:49.887225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.214 [2024-12-09 23:06:50.025637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:34.474 [2024-12-09 23:06:50.268291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:34.474 [2024-12-09 23:06:50.268448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.043 malloc1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.043 [2024-12-09 23:06:50.702571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:35.043 [2024-12-09 23:06:50.702738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:35.043 [2024-12-09 23:06:50.702777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:25:35.043 [2024-12-09 23:06:50.702791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:35.043 [2024-12-09 23:06:50.705133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:35.043 [2024-12-09 23:06:50.705182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:35.043 pt1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.043 malloc2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.043 [2024-12-09 23:06:50.761023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:35.043 [2024-12-09 23:06:50.761175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:35.043 [2024-12-09 23:06:50.761206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:25:35.043 [2024-12-09 23:06:50.761218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:35.043 [2024-12-09 23:06:50.763498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:35.043 [2024-12-09 23:06:50.763542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:35.043 pt2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.043 [2024-12-09 23:06:50.769056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:35.043 [2024-12-09 23:06:50.771232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:35.043 [2024-12-09 23:06:50.771495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:25:35.043 [2024-12-09 23:06:50.771512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:25:35.043 [2024-12-09 23:06:50.771632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:25:35.043 [2024-12-09 23:06:50.771796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:25:35.043 [2024-12-09 23:06:50.771816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:25:35.043 [2024-12-09 23:06:50.771916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:35.043 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:35.044 "name": "raid_bdev1",
00:25:35.044 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365",
00:25:35.044 "strip_size_kb": 0,
00:25:35.044 "state": "online",
00:25:35.044 "raid_level": "raid1",
00:25:35.044 "superblock": true,
00:25:35.044 "num_base_bdevs": 2,
00:25:35.044 "num_base_bdevs_discovered": 2,
00:25:35.044 "num_base_bdevs_operational": 2,
00:25:35.044 "base_bdevs_list": [
00:25:35.044 {
00:25:35.044 "name": "pt1",
00:25:35.044 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:35.044 "is_configured": true,
00:25:35.044 "data_offset": 256,
00:25:35.044 "data_size": 7936
00:25:35.044 },
00:25:35.044 {
00:25:35.044 "name": "pt2",
00:25:35.044 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:35.044 "is_configured": true,
00:25:35.044 "data_offset": 256,
00:25:35.044 "data_size": 7936
00:25:35.044 }
00:25:35.044 ]
00:25:35.044 }'
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:35.044 23:06:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:25:35.613 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:25:35.614 [2024-12-09 23:06:51.220913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:25:35.614 "name": "raid_bdev1",
00:25:35.614 "aliases": [
00:25:35.614 "ba43654d-185a-445a-a0c3-1cb7747e8365"
00:25:35.614 ],
00:25:35.614 "product_name": "Raid Volume",
00:25:35.614 "block_size": 4128,
"num_blocks": 7936, 00:25:35.614 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:35.614 "md_size": 32, 00:25:35.614 "md_interleave": true, 00:25:35.614 "dif_type": 0, 00:25:35.614 "assigned_rate_limits": { 00:25:35.614 "rw_ios_per_sec": 0, 00:25:35.614 "rw_mbytes_per_sec": 0, 00:25:35.614 "r_mbytes_per_sec": 0, 00:25:35.614 "w_mbytes_per_sec": 0 00:25:35.614 }, 00:25:35.614 "claimed": false, 00:25:35.614 "zoned": false, 00:25:35.614 "supported_io_types": { 00:25:35.614 "read": true, 00:25:35.614 "write": true, 00:25:35.614 "unmap": false, 00:25:35.614 "flush": false, 00:25:35.614 "reset": true, 00:25:35.614 "nvme_admin": false, 00:25:35.614 "nvme_io": false, 00:25:35.614 "nvme_io_md": false, 00:25:35.614 "write_zeroes": true, 00:25:35.614 "zcopy": false, 00:25:35.614 "get_zone_info": false, 00:25:35.614 "zone_management": false, 00:25:35.614 "zone_append": false, 00:25:35.614 "compare": false, 00:25:35.614 "compare_and_write": false, 00:25:35.614 "abort": false, 00:25:35.614 "seek_hole": false, 00:25:35.614 "seek_data": false, 00:25:35.614 "copy": false, 00:25:35.614 "nvme_iov_md": false 00:25:35.614 }, 00:25:35.614 "memory_domains": [ 00:25:35.614 { 00:25:35.614 "dma_device_id": "system", 00:25:35.614 "dma_device_type": 1 00:25:35.614 }, 00:25:35.614 { 00:25:35.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.614 "dma_device_type": 2 00:25:35.614 }, 00:25:35.614 { 00:25:35.614 "dma_device_id": "system", 00:25:35.614 "dma_device_type": 1 00:25:35.614 }, 00:25:35.614 { 00:25:35.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.614 "dma_device_type": 2 00:25:35.614 } 00:25:35.614 ], 00:25:35.614 "driver_specific": { 00:25:35.614 "raid": { 00:25:35.614 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:35.614 "strip_size_kb": 0, 00:25:35.614 "state": "online", 00:25:35.614 "raid_level": "raid1", 00:25:35.614 "superblock": true, 00:25:35.614 "num_base_bdevs": 2, 00:25:35.614 "num_base_bdevs_discovered": 2, 00:25:35.614 "num_base_bdevs_operational": 
2, 00:25:35.614 "base_bdevs_list": [ 00:25:35.614 { 00:25:35.614 "name": "pt1", 00:25:35.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.614 "is_configured": true, 00:25:35.614 "data_offset": 256, 00:25:35.614 "data_size": 7936 00:25:35.614 }, 00:25:35.614 { 00:25:35.614 "name": "pt2", 00:25:35.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.614 "is_configured": true, 00:25:35.614 "data_offset": 256, 00:25:35.614 "data_size": 7936 00:25:35.614 } 00:25:35.614 ] 00:25:35.614 } 00:25:35.614 } 00:25:35.614 }' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:35.614 pt2' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.614 23:06:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:35.614 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.880 [2024-12-09 23:06:51.476486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ba43654d-185a-445a-a0c3-1cb7747e8365 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z ba43654d-185a-445a-a0c3-1cb7747e8365 ']' 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.880 [2024-12-09 23:06:51.504020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.880 [2024-12-09 23:06:51.504098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.880 [2024-12-09 23:06:51.504239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.880 [2024-12-09 23:06:51.504342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.880 [2024-12-09 23:06:51.504398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.880 23:06:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.880 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.881 [2024-12-09 23:06:51.635873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:35.881 [2024-12-09 23:06:51.638416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:35.881 [2024-12-09 
23:06:51.638603] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:35.881 [2024-12-09 23:06:51.638737] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:35.881 [2024-12-09 23:06:51.638796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.881 [2024-12-09 23:06:51.638832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:35.881 request: 00:25:35.881 { 00:25:35.881 "name": "raid_bdev1", 00:25:35.881 "raid_level": "raid1", 00:25:35.881 "base_bdevs": [ 00:25:35.881 "malloc1", 00:25:35.881 "malloc2" 00:25:35.881 ], 00:25:35.881 "superblock": false, 00:25:35.881 "method": "bdev_raid_create", 00:25:35.881 "req_id": 1 00:25:35.881 } 00:25:35.881 Got JSON-RPC error response 00:25:35.881 response: 00:25:35.881 { 00:25:35.881 "code": -17, 00:25:35.881 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:35.881 } 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:35.881 [2024-12-09 23:06:51.711679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:35.881 [2024-12-09 23:06:51.711829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.881 [2024-12-09 23:06:51.711872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:35.881 [2024-12-09 23:06:51.711925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.881 [2024-12-09 23:06:51.714228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.881 [2024-12-09 23:06:51.714347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:35.881 [2024-12-09 23:06:51.714500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:35.881 [2024-12-09 23:06:51.714645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:35.881 pt1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.881 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.146 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.146 23:06:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.146 "name": "raid_bdev1", 00:25:36.146 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:36.146 "strip_size_kb": 0, 00:25:36.146 "state": "configuring", 00:25:36.146 "raid_level": "raid1", 00:25:36.146 "superblock": true, 00:25:36.146 "num_base_bdevs": 2, 00:25:36.146 "num_base_bdevs_discovered": 1, 00:25:36.146 "num_base_bdevs_operational": 2, 00:25:36.146 "base_bdevs_list": [ 00:25:36.146 { 00:25:36.146 "name": "pt1", 00:25:36.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.146 "is_configured": true, 00:25:36.146 "data_offset": 256, 00:25:36.146 "data_size": 7936 00:25:36.146 }, 00:25:36.146 { 00:25:36.146 "name": null, 00:25:36.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.146 "is_configured": false, 00:25:36.146 "data_offset": 256, 00:25:36.146 "data_size": 7936 00:25:36.146 } 00:25:36.146 ] 00:25:36.146 }' 00:25:36.146 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.146 23:06:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.405 [2024-12-09 23:06:52.206862] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:36.405 [2024-12-09 23:06:52.207015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.405 [2024-12-09 23:06:52.207063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:36.405 [2024-12-09 23:06:52.207149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.405 [2024-12-09 23:06:52.207374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.405 [2024-12-09 23:06:52.207431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:36.405 [2024-12-09 23:06:52.207540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:36.405 [2024-12-09 23:06:52.207601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:36.405 [2024-12-09 23:06:52.207730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:36.405 [2024-12-09 23:06:52.207775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:36.405 [2024-12-09 23:06:52.207883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:36.405 [2024-12-09 23:06:52.208003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:36.405 [2024-12-09 23:06:52.208015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:36.405 [2024-12-09 23:06:52.208093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.405 pt2 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:36.405 23:06:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.405 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.674 23:06:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.675 "name": "raid_bdev1", 00:25:36.675 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:36.675 "strip_size_kb": 0, 00:25:36.675 "state": "online", 00:25:36.675 "raid_level": "raid1", 00:25:36.675 "superblock": true, 00:25:36.675 "num_base_bdevs": 2, 00:25:36.675 "num_base_bdevs_discovered": 2, 00:25:36.675 "num_base_bdevs_operational": 2, 00:25:36.675 "base_bdevs_list": [ 00:25:36.675 { 00:25:36.675 "name": "pt1", 00:25:36.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.675 "is_configured": true, 00:25:36.675 "data_offset": 256, 00:25:36.675 "data_size": 7936 00:25:36.675 }, 00:25:36.675 { 00:25:36.675 "name": "pt2", 00:25:36.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.675 "is_configured": true, 00:25:36.675 "data_offset": 256, 00:25:36.675 "data_size": 7936 00:25:36.675 } 00:25:36.675 ] 00:25:36.675 }' 00:25:36.675 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.675 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:36.934 [2024-12-09 23:06:52.702328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.934 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.934 "name": "raid_bdev1", 00:25:36.934 "aliases": [ 00:25:36.934 "ba43654d-185a-445a-a0c3-1cb7747e8365" 00:25:36.934 ], 00:25:36.934 "product_name": "Raid Volume", 00:25:36.934 "block_size": 4128, 00:25:36.934 "num_blocks": 7936, 00:25:36.934 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:36.934 "md_size": 32, 00:25:36.934 "md_interleave": true, 00:25:36.934 "dif_type": 0, 00:25:36.934 "assigned_rate_limits": { 00:25:36.934 "rw_ios_per_sec": 0, 00:25:36.934 "rw_mbytes_per_sec": 0, 00:25:36.934 "r_mbytes_per_sec": 0, 00:25:36.934 "w_mbytes_per_sec": 0 00:25:36.934 }, 00:25:36.934 "claimed": false, 00:25:36.934 "zoned": false, 00:25:36.934 "supported_io_types": { 00:25:36.934 "read": true, 00:25:36.934 "write": true, 00:25:36.934 "unmap": false, 00:25:36.934 "flush": false, 00:25:36.934 "reset": true, 00:25:36.934 "nvme_admin": false, 00:25:36.934 "nvme_io": false, 00:25:36.934 "nvme_io_md": false, 00:25:36.934 "write_zeroes": true, 00:25:36.934 "zcopy": false, 00:25:36.934 "get_zone_info": false, 00:25:36.934 "zone_management": false, 00:25:36.934 "zone_append": false, 00:25:36.934 "compare": false, 00:25:36.934 "compare_and_write": false, 00:25:36.934 "abort": false, 00:25:36.934 "seek_hole": false, 
00:25:36.934 "seek_data": false, 00:25:36.934 "copy": false, 00:25:36.934 "nvme_iov_md": false 00:25:36.934 }, 00:25:36.934 "memory_domains": [ 00:25:36.934 { 00:25:36.934 "dma_device_id": "system", 00:25:36.934 "dma_device_type": 1 00:25:36.934 }, 00:25:36.934 { 00:25:36.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.934 "dma_device_type": 2 00:25:36.934 }, 00:25:36.934 { 00:25:36.934 "dma_device_id": "system", 00:25:36.934 "dma_device_type": 1 00:25:36.934 }, 00:25:36.934 { 00:25:36.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.934 "dma_device_type": 2 00:25:36.934 } 00:25:36.934 ], 00:25:36.934 "driver_specific": { 00:25:36.934 "raid": { 00:25:36.934 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:36.934 "strip_size_kb": 0, 00:25:36.934 "state": "online", 00:25:36.934 "raid_level": "raid1", 00:25:36.934 "superblock": true, 00:25:36.935 "num_base_bdevs": 2, 00:25:36.935 "num_base_bdevs_discovered": 2, 00:25:36.935 "num_base_bdevs_operational": 2, 00:25:36.935 "base_bdevs_list": [ 00:25:36.935 { 00:25:36.935 "name": "pt1", 00:25:36.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.935 "is_configured": true, 00:25:36.935 "data_offset": 256, 00:25:36.935 "data_size": 7936 00:25:36.935 }, 00:25:36.935 { 00:25:36.935 "name": "pt2", 00:25:36.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.935 "is_configured": true, 00:25:36.935 "data_offset": 256, 00:25:36.935 "data_size": 7936 00:25:36.935 } 00:25:36.935 ] 00:25:36.935 } 00:25:36.935 } 00:25:36.935 }' 00:25:36.935 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:36.935 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:36.935 pt2' 00:25:36.935 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.195 
23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.195 [2024-12-09 23:06:52.953961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' ba43654d-185a-445a-a0c3-1cb7747e8365 '!=' ba43654d-185a-445a-a0c3-1cb7747e8365 ']' 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:37.195 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.196 [2024-12-09 23:06:52.985671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:37.196 23:06:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.196 23:06:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.196 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.196 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.196 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.196 23:06:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.196 "name": "raid_bdev1", 00:25:37.196 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:37.196 "strip_size_kb": 0, 00:25:37.196 "state": "online", 00:25:37.196 "raid_level": "raid1", 00:25:37.196 "superblock": true, 00:25:37.196 "num_base_bdevs": 2, 00:25:37.196 "num_base_bdevs_discovered": 1, 00:25:37.196 "num_base_bdevs_operational": 1, 00:25:37.196 "base_bdevs_list": [ 00:25:37.196 { 00:25:37.196 "name": null, 00:25:37.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.196 "is_configured": false, 00:25:37.196 "data_offset": 0, 00:25:37.196 "data_size": 7936 00:25:37.196 }, 00:25:37.196 { 00:25:37.196 "name": "pt2", 00:25:37.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.196 "is_configured": true, 00:25:37.196 "data_offset": 256, 00:25:37.196 "data_size": 7936 00:25:37.196 } 00:25:37.196 ] 00:25:37.196 }' 00:25:37.196 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.196 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.765 [2024-12-09 23:06:53.476746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.765 [2024-12-09 23:06:53.476837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.765 [2024-12-09 23:06:53.476956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.765 [2024-12-09 23:06:53.477045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.765 [2024-12-09 23:06:53.477102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:37.765 23:06:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.765 [2024-12-09 23:06:53.548723] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:37.765 [2024-12-09 23:06:53.548874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.765 [2024-12-09 23:06:53.548899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:37.765 [2024-12-09 23:06:53.548913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.765 [2024-12-09 23:06:53.551190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.765 [2024-12-09 23:06:53.551242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:37.765 [2024-12-09 23:06:53.551313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:37.765 [2024-12-09 23:06:53.551385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.765 [2024-12-09 23:06:53.551491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:37.765 [2024-12-09 23:06:53.551507] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:37.765 [2024-12-09 23:06:53.551628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:37.765 [2024-12-09 23:06:53.551715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:37.765 [2024-12-09 23:06:53.551724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:37.765 [2024-12-09 23:06:53.551801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.765 pt2 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.765 "name": "raid_bdev1", 00:25:37.765 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:37.765 "strip_size_kb": 0, 00:25:37.765 "state": "online", 00:25:37.765 "raid_level": "raid1", 00:25:37.765 "superblock": true, 00:25:37.765 "num_base_bdevs": 2, 00:25:37.765 "num_base_bdevs_discovered": 1, 00:25:37.765 "num_base_bdevs_operational": 1, 00:25:37.765 "base_bdevs_list": [ 00:25:37.765 { 00:25:37.765 "name": null, 00:25:37.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.765 "is_configured": false, 00:25:37.765 "data_offset": 256, 00:25:37.765 "data_size": 7936 00:25:37.765 }, 00:25:37.765 { 00:25:37.765 "name": "pt2", 00:25:37.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.765 "is_configured": true, 00:25:37.765 "data_offset": 256, 00:25:37.765 "data_size": 7936 00:25:37.765 } 00:25:37.765 ] 00:25:37.765 }' 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.765 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.335 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:38.335 23:06:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.335 23:06:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.335 [2024-12-09 23:06:54.003892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.335 [2024-12-09 23:06:54.003988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:38.335 [2024-12-09 23:06:54.004085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.335 [2024-12-09 23:06:54.004145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.335 [2024-12-09 23:06:54.004157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.335 [2024-12-09 23:06:54.067856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:38.335 [2024-12-09 23:06:54.067998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.335 [2024-12-09 23:06:54.068046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:38.335 [2024-12-09 23:06:54.068078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.335 [2024-12-09 23:06:54.070351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.335 [2024-12-09 23:06:54.070442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:38.335 [2024-12-09 23:06:54.070548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:38.335 [2024-12-09 23:06:54.070628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:38.335 [2024-12-09 23:06:54.070765] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:38.335 [2024-12-09 23:06:54.070778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.335 [2024-12-09 23:06:54.070800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:38.335 [2024-12-09 23:06:54.070862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:38.335 [2024-12-09 23:06:54.070943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:25:38.335 [2024-12-09 23:06:54.070953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:38.335 [2024-12-09 23:06:54.071041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:38.335 [2024-12-09 23:06:54.071113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:38.335 [2024-12-09 23:06:54.071125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:38.335 [2024-12-09 23:06:54.071202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.335 pt1 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.335 23:06:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.335 "name": "raid_bdev1", 00:25:38.335 "uuid": "ba43654d-185a-445a-a0c3-1cb7747e8365", 00:25:38.335 "strip_size_kb": 0, 00:25:38.335 "state": "online", 00:25:38.335 "raid_level": "raid1", 00:25:38.335 "superblock": true, 00:25:38.335 "num_base_bdevs": 2, 00:25:38.335 "num_base_bdevs_discovered": 1, 00:25:38.335 "num_base_bdevs_operational": 1, 00:25:38.335 "base_bdevs_list": [ 00:25:38.335 { 00:25:38.335 "name": null, 00:25:38.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.335 "is_configured": false, 00:25:38.335 "data_offset": 256, 00:25:38.335 "data_size": 7936 00:25:38.335 }, 00:25:38.335 { 00:25:38.335 "name": "pt2", 00:25:38.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:38.335 "is_configured": true, 00:25:38.335 "data_offset": 256, 00:25:38.335 "data_size": 7936 00:25:38.335 } 00:25:38.335 ] 00:25:38.335 }' 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.335 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:38.901 [2024-12-09 23:06:54.619165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' ba43654d-185a-445a-a0c3-1cb7747e8365 '!=' ba43654d-185a-445a-a0c3-1cb7747e8365 ']' 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89456 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89456 ']' 00:25:38.901 23:06:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89456 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89456 00:25:38.901 killing process with pid 89456 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89456' 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89456 00:25:38.901 [2024-12-09 23:06:54.689298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:38.901 23:06:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89456 00:25:38.901 [2024-12-09 23:06:54.689405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.901 [2024-12-09 23:06:54.689461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.901 [2024-12-09 23:06:54.689491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:39.158 [2024-12-09 23:06:54.937597] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:40.532 23:06:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:25:40.532 00:25:40.532 real 0m6.631s 00:25:40.532 user 0m9.994s 00:25:40.532 sys 0m1.202s 00:25:40.532 
************************************ 00:25:40.532 END TEST raid_superblock_test_md_interleaved 00:25:40.532 ************************************ 00:25:40.532 23:06:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:40.532 23:06:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.532 23:06:56 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:40.532 23:06:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:40.532 23:06:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.532 23:06:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:40.532 ************************************ 00:25:40.532 START TEST raid_rebuild_test_sb_md_interleaved 00:25:40.532 ************************************ 00:25:40.532 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:40.533 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89783 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89783 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89783 ']' 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.533 23:06:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.792 [2024-12-09 23:06:56.392332] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:25:40.792 [2024-12-09 23:06:56.393108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89783 ] 00:25:40.792 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:40.792 Zero copy mechanism will not be used. 
00:25:40.792 [2024-12-09 23:06:56.575694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.065 [2024-12-09 23:06:56.706460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.326 [2024-12-09 23:06:56.941715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:41.326 [2024-12-09 23:06:56.941879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:41.586 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.586 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:41.586 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:41.586 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:41.586 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.587 BaseBdev1_malloc 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.587 [2024-12-09 23:06:57.387224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:41.587 [2024-12-09 23:06:57.387314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.587 
[2024-12-09 23:06:57.387344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:41.587 [2024-12-09 23:06:57.387359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.587 [2024-12-09 23:06:57.389660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.587 [2024-12-09 23:06:57.389714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:41.587 BaseBdev1 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.587 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.846 BaseBdev2_malloc 00:25:41.846 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.846 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:41.846 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.846 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.847 [2024-12-09 23:06:57.451313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:41.847 [2024-12-09 23:06:57.451397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.847 [2024-12-09 23:06:57.451420] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:41.847 [2024-12-09 23:06:57.451435] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.847 [2024-12-09 23:06:57.453630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.847 [2024-12-09 23:06:57.453727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:41.847 BaseBdev2 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.847 spare_malloc 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.847 spare_delay 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.847 23:06:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.847 [2024-12-09 23:06:57.537012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:41.847 [2024-12-09 23:06:57.537173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.847 [2024-12-09 23:06:57.537242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:41.847 [2024-12-09 23:06:57.537280] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.847 [2024-12-09 23:06:57.539578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.847 [2024-12-09 23:06:57.539674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:41.847 spare 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.847 [2024-12-09 23:06:57.549041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.847 [2024-12-09 23:06:57.551229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:41.847 [2024-12-09 23:06:57.551557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:41.847 [2024-12-09 23:06:57.551619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:41.847 [2024-12-09 23:06:57.551752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:25:41.847 [2024-12-09 23:06:57.551873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:41.847 [2024-12-09 23:06:57.551915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:41.847 [2024-12-09 23:06:57.552058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.847 "name": "raid_bdev1", 00:25:41.847 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:41.847 "strip_size_kb": 0, 00:25:41.847 "state": "online", 00:25:41.847 "raid_level": "raid1", 00:25:41.847 "superblock": true, 00:25:41.847 "num_base_bdevs": 2, 00:25:41.847 "num_base_bdevs_discovered": 2, 00:25:41.847 "num_base_bdevs_operational": 2, 00:25:41.847 "base_bdevs_list": [ 00:25:41.847 { 00:25:41.847 "name": "BaseBdev1", 00:25:41.847 "uuid": "04b5fe3b-b68b-549e-8174-bfaa58cb21b7", 00:25:41.847 "is_configured": true, 00:25:41.847 "data_offset": 256, 00:25:41.847 "data_size": 7936 00:25:41.847 }, 00:25:41.847 { 00:25:41.847 "name": "BaseBdev2", 00:25:41.847 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:41.847 "is_configured": true, 00:25:41.847 "data_offset": 256, 00:25:41.847 "data_size": 7936 00:25:41.847 } 00:25:41.847 ] 00:25:41.847 }' 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.847 23:06:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.418 [2024-12-09 23:06:58.060703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.418 [2024-12-09 23:06:58.156146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:42.418 23:06:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:42.418 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.419 23:06:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.419 "name": "raid_bdev1", 00:25:42.419 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:42.419 "strip_size_kb": 0, 00:25:42.419 "state": "online", 00:25:42.419 "raid_level": "raid1", 00:25:42.419 "superblock": true, 00:25:42.419 "num_base_bdevs": 2, 00:25:42.419 "num_base_bdevs_discovered": 1, 00:25:42.419 "num_base_bdevs_operational": 1, 00:25:42.419 "base_bdevs_list": [ 00:25:42.419 { 00:25:42.419 "name": null, 00:25:42.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.419 "is_configured": false, 00:25:42.419 "data_offset": 0, 00:25:42.419 "data_size": 7936 00:25:42.419 }, 00:25:42.419 { 00:25:42.419 "name": "BaseBdev2", 00:25:42.419 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:42.419 "is_configured": true, 00:25:42.419 "data_offset": 256, 00:25:42.419 "data_size": 7936 00:25:42.419 } 00:25:42.419 ] 00:25:42.419 }' 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.419 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.986 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:42.986 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.986 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.986 [2024-12-09 23:06:58.631363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:42.986 [2024-12-09 23:06:58.652314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:42.986 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.986 23:06:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:25:42.986 [2024-12-09 23:06:58.654579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:43.923 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:43.924 "name": "raid_bdev1", 00:25:43.924 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:43.924 "strip_size_kb": 0, 00:25:43.924 "state": "online", 00:25:43.924 "raid_level": "raid1", 00:25:43.924 "superblock": true, 00:25:43.924 "num_base_bdevs": 2, 00:25:43.924 "num_base_bdevs_discovered": 2, 00:25:43.924 "num_base_bdevs_operational": 2, 00:25:43.924 "process": { 00:25:43.924 "type": "rebuild", 00:25:43.924 "target": "spare", 
00:25:43.924 "progress": { 00:25:43.924 "blocks": 2560, 00:25:43.924 "percent": 32 00:25:43.924 } 00:25:43.924 }, 00:25:43.924 "base_bdevs_list": [ 00:25:43.924 { 00:25:43.924 "name": "spare", 00:25:43.924 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:43.924 "is_configured": true, 00:25:43.924 "data_offset": 256, 00:25:43.924 "data_size": 7936 00:25:43.924 }, 00:25:43.924 { 00:25:43.924 "name": "BaseBdev2", 00:25:43.924 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:43.924 "is_configured": true, 00:25:43.924 "data_offset": 256, 00:25:43.924 "data_size": 7936 00:25:43.924 } 00:25:43.924 ] 00:25:43.924 }' 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.924 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.183 [2024-12-09 23:06:59.785838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:44.183 [2024-12-09 23:06:59.860944] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:44.183 [2024-12-09 23:06:59.861158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.183 [2024-12-09 23:06:59.861210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:25:44.183 [2024-12-09 23:06:59.861245] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.183 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.183 "name": "raid_bdev1", 00:25:44.183 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:44.183 "strip_size_kb": 0, 00:25:44.183 "state": "online", 00:25:44.183 "raid_level": "raid1", 00:25:44.183 "superblock": true, 00:25:44.183 "num_base_bdevs": 2, 00:25:44.183 "num_base_bdevs_discovered": 1, 00:25:44.183 "num_base_bdevs_operational": 1, 00:25:44.183 "base_bdevs_list": [ 00:25:44.183 { 00:25:44.183 "name": null, 00:25:44.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.183 "is_configured": false, 00:25:44.184 "data_offset": 0, 00:25:44.184 "data_size": 7936 00:25:44.184 }, 00:25:44.184 { 00:25:44.184 "name": "BaseBdev2", 00:25:44.184 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:44.184 "is_configured": true, 00:25:44.184 "data_offset": 256, 00:25:44.184 "data_size": 7936 00:25:44.184 } 00:25:44.184 ] 00:25:44.184 }' 00:25:44.184 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.184 23:06:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.751 "name": "raid_bdev1", 00:25:44.751 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:44.751 "strip_size_kb": 0, 00:25:44.751 "state": "online", 00:25:44.751 "raid_level": "raid1", 00:25:44.751 "superblock": true, 00:25:44.751 "num_base_bdevs": 2, 00:25:44.751 "num_base_bdevs_discovered": 1, 00:25:44.751 "num_base_bdevs_operational": 1, 00:25:44.751 "base_bdevs_list": [ 00:25:44.751 { 00:25:44.751 "name": null, 00:25:44.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.751 "is_configured": false, 00:25:44.751 "data_offset": 0, 00:25:44.751 "data_size": 7936 00:25:44.751 }, 00:25:44.751 { 00:25:44.751 "name": "BaseBdev2", 00:25:44.751 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:44.751 "is_configured": true, 00:25:44.751 "data_offset": 256, 00:25:44.751 "data_size": 7936 00:25:44.751 } 00:25:44.751 ] 00:25:44.751 }' 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.751 
23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.751 [2024-12-09 23:07:00.521521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.751 [2024-12-09 23:07:00.541351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.751 23:07:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:44.751 [2024-12-09 23:07:00.543694] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.129 "name": "raid_bdev1", 00:25:46.129 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:46.129 "strip_size_kb": 0, 00:25:46.129 "state": "online", 00:25:46.129 "raid_level": "raid1", 00:25:46.129 "superblock": true, 00:25:46.129 "num_base_bdevs": 2, 00:25:46.129 "num_base_bdevs_discovered": 2, 00:25:46.129 "num_base_bdevs_operational": 2, 00:25:46.129 "process": { 00:25:46.129 "type": "rebuild", 00:25:46.129 "target": "spare", 00:25:46.129 "progress": { 00:25:46.129 "blocks": 2560, 00:25:46.129 "percent": 32 00:25:46.129 } 00:25:46.129 }, 00:25:46.129 "base_bdevs_list": [ 00:25:46.129 { 00:25:46.129 "name": "spare", 00:25:46.129 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:46.129 "is_configured": true, 00:25:46.129 "data_offset": 256, 00:25:46.129 "data_size": 7936 00:25:46.129 }, 00:25:46.129 { 00:25:46.129 "name": "BaseBdev2", 00:25:46.129 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:46.129 "is_configured": true, 00:25:46.129 "data_offset": 256, 00:25:46.129 "data_size": 7936 00:25:46.129 } 00:25:46.129 ] 00:25:46.129 }' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.129 23:07:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:46.129 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=779 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.129 23:07:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.129 "name": "raid_bdev1", 00:25:46.129 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:46.129 "strip_size_kb": 0, 00:25:46.129 "state": "online", 00:25:46.129 "raid_level": "raid1", 00:25:46.129 "superblock": true, 00:25:46.129 "num_base_bdevs": 2, 00:25:46.129 "num_base_bdevs_discovered": 2, 00:25:46.129 "num_base_bdevs_operational": 2, 00:25:46.129 "process": { 00:25:46.129 "type": "rebuild", 00:25:46.129 "target": "spare", 00:25:46.129 "progress": { 00:25:46.129 "blocks": 2816, 00:25:46.129 "percent": 35 00:25:46.129 } 00:25:46.129 }, 00:25:46.129 "base_bdevs_list": [ 00:25:46.129 { 00:25:46.129 "name": "spare", 00:25:46.129 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:46.129 "is_configured": true, 00:25:46.129 "data_offset": 256, 00:25:46.129 "data_size": 7936 00:25:46.129 }, 00:25:46.129 { 00:25:46.129 "name": "BaseBdev2", 00:25:46.129 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:46.129 "is_configured": true, 00:25:46.129 "data_offset": 256, 00:25:46.129 "data_size": 7936 00:25:46.129 } 00:25:46.129 ] 00:25:46.129 }' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.129 23:07:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:47.066 "name": "raid_bdev1", 00:25:47.066 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:47.066 "strip_size_kb": 0, 00:25:47.066 "state": "online", 00:25:47.066 "raid_level": "raid1", 00:25:47.066 "superblock": true, 00:25:47.066 "num_base_bdevs": 2, 00:25:47.066 "num_base_bdevs_discovered": 2, 00:25:47.066 
"num_base_bdevs_operational": 2, 00:25:47.066 "process": { 00:25:47.066 "type": "rebuild", 00:25:47.066 "target": "spare", 00:25:47.066 "progress": { 00:25:47.066 "blocks": 5632, 00:25:47.066 "percent": 70 00:25:47.066 } 00:25:47.066 }, 00:25:47.066 "base_bdevs_list": [ 00:25:47.066 { 00:25:47.066 "name": "spare", 00:25:47.066 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:47.066 "is_configured": true, 00:25:47.066 "data_offset": 256, 00:25:47.066 "data_size": 7936 00:25:47.066 }, 00:25:47.066 { 00:25:47.066 "name": "BaseBdev2", 00:25:47.066 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:47.066 "is_configured": true, 00:25:47.066 "data_offset": 256, 00:25:47.066 "data_size": 7936 00:25:47.066 } 00:25:47.066 ] 00:25:47.066 }' 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.066 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:47.325 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.325 23:07:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:47.978 [2024-12-09 23:07:03.660199] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:47.978 [2024-12-09 23:07:03.660300] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:47.978 [2024-12-09 23:07:03.660454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.237 23:07:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.237 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.237 "name": "raid_bdev1", 00:25:48.237 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:48.237 "strip_size_kb": 0, 00:25:48.237 "state": "online", 00:25:48.237 "raid_level": "raid1", 00:25:48.237 "superblock": true, 00:25:48.237 "num_base_bdevs": 2, 00:25:48.237 "num_base_bdevs_discovered": 2, 00:25:48.237 "num_base_bdevs_operational": 2, 00:25:48.237 "base_bdevs_list": [ 00:25:48.237 { 00:25:48.237 "name": "spare", 00:25:48.237 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:48.237 "is_configured": true, 00:25:48.237 "data_offset": 256, 00:25:48.237 "data_size": 7936 00:25:48.237 }, 00:25:48.237 { 00:25:48.237 "name": "BaseBdev2", 00:25:48.237 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:48.237 
"is_configured": true, 00:25:48.237 "data_offset": 256, 00:25:48.237 "data_size": 7936 00:25:48.237 } 00:25:48.237 ] 00:25:48.238 }' 00:25:48.238 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.238 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:48.238 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.497 "name": "raid_bdev1", 00:25:48.497 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:48.497 "strip_size_kb": 0, 00:25:48.497 "state": "online", 00:25:48.497 "raid_level": "raid1", 00:25:48.497 "superblock": true, 00:25:48.497 "num_base_bdevs": 2, 00:25:48.497 "num_base_bdevs_discovered": 2, 00:25:48.497 "num_base_bdevs_operational": 2, 00:25:48.497 "base_bdevs_list": [ 00:25:48.497 { 00:25:48.497 "name": "spare", 00:25:48.497 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:48.497 "is_configured": true, 00:25:48.497 "data_offset": 256, 00:25:48.497 "data_size": 7936 00:25:48.497 }, 00:25:48.497 { 00:25:48.497 "name": "BaseBdev2", 00:25:48.497 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:48.497 "is_configured": true, 00:25:48.497 "data_offset": 256, 00:25:48.497 "data_size": 7936 00:25:48.497 } 00:25:48.497 ] 00:25:48.497 }' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.497 "name": "raid_bdev1", 00:25:48.497 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:48.497 "strip_size_kb": 0, 00:25:48.497 "state": "online", 00:25:48.497 "raid_level": "raid1", 00:25:48.497 "superblock": true, 00:25:48.497 "num_base_bdevs": 2, 00:25:48.497 "num_base_bdevs_discovered": 2, 00:25:48.497 "num_base_bdevs_operational": 2, 00:25:48.497 "base_bdevs_list": [ 00:25:48.497 { 00:25:48.497 "name": "spare", 00:25:48.497 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:48.497 
"is_configured": true, 00:25:48.497 "data_offset": 256, 00:25:48.497 "data_size": 7936 00:25:48.497 }, 00:25:48.497 { 00:25:48.497 "name": "BaseBdev2", 00:25:48.497 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:48.497 "is_configured": true, 00:25:48.497 "data_offset": 256, 00:25:48.497 "data_size": 7936 00:25:48.497 } 00:25:48.497 ] 00:25:48.497 }' 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.497 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.065 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:49.065 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.065 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.065 [2024-12-09 23:07:04.727563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:49.065 [2024-12-09 23:07:04.727675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:49.065 [2024-12-09 23:07:04.727810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.065 [2024-12-09 23:07:04.727922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.065 [2024-12-09 23:07:04.727977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:49.065 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.065 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.065 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:25:49.066 
23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.066 [2024-12-09 23:07:04.807419] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:49.066 [2024-12-09 23:07:04.807603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.066 [2024-12-09 23:07:04.807662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:49.066 [2024-12-09 23:07:04.807702] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.066 [2024-12-09 23:07:04.810119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.066 [2024-12-09 23:07:04.810241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:49.066 [2024-12-09 23:07:04.810370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:49.066 [2024-12-09 23:07:04.810496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.066 [2024-12-09 23:07:04.810705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:49.066 spare 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.066 [2024-12-09 23:07:04.910688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:49.066 [2024-12-09 23:07:04.910851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:49.066 [2024-12-09 23:07:04.911037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:49.066 [2024-12-09 23:07:04.911209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:49.066 [2024-12-09 23:07:04.911258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:49.066 [2024-12-09 23:07:04.911448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.066 23:07:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.066 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.329 23:07:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.329 "name": "raid_bdev1", 00:25:49.329 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:49.329 "strip_size_kb": 0, 00:25:49.329 "state": "online", 00:25:49.329 "raid_level": "raid1", 00:25:49.329 "superblock": true, 00:25:49.329 "num_base_bdevs": 2, 00:25:49.329 "num_base_bdevs_discovered": 2, 00:25:49.329 "num_base_bdevs_operational": 2, 00:25:49.329 "base_bdevs_list": [ 00:25:49.329 { 00:25:49.329 "name": "spare", 00:25:49.329 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:49.329 "is_configured": true, 00:25:49.329 "data_offset": 256, 00:25:49.329 "data_size": 7936 00:25:49.329 }, 00:25:49.329 { 00:25:49.329 "name": "BaseBdev2", 00:25:49.329 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:49.329 "is_configured": true, 00:25:49.329 "data_offset": 256, 00:25:49.329 "data_size": 7936 00:25:49.329 } 00:25:49.329 ] 00:25:49.329 }' 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.329 23:07:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.589 23:07:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.589 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:49.850 "name": "raid_bdev1", 00:25:49.850 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:49.850 "strip_size_kb": 0, 00:25:49.850 "state": "online", 00:25:49.850 "raid_level": "raid1", 00:25:49.850 "superblock": true, 00:25:49.850 "num_base_bdevs": 2, 00:25:49.850 "num_base_bdevs_discovered": 2, 00:25:49.850 "num_base_bdevs_operational": 2, 00:25:49.850 "base_bdevs_list": [ 00:25:49.850 { 00:25:49.850 "name": "spare", 00:25:49.850 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:49.850 "is_configured": true, 00:25:49.850 "data_offset": 256, 00:25:49.850 "data_size": 7936 00:25:49.850 }, 00:25:49.850 { 00:25:49.850 "name": "BaseBdev2", 00:25:49.850 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:49.850 "is_configured": true, 00:25:49.850 "data_offset": 256, 00:25:49.850 "data_size": 7936 00:25:49.850 } 00:25:49.850 ] 00:25:49.850 }' 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:49.850 23:07:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.850 [2024-12-09 23:07:05.610408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.850 23:07:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.850 "name": "raid_bdev1", 00:25:49.850 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:49.850 "strip_size_kb": 0, 00:25:49.850 "state": "online", 00:25:49.850 "raid_level": "raid1", 00:25:49.850 "superblock": true, 00:25:49.850 "num_base_bdevs": 2, 00:25:49.850 "num_base_bdevs_discovered": 1, 00:25:49.850 "num_base_bdevs_operational": 1, 00:25:49.850 "base_bdevs_list": [ 00:25:49.850 { 00:25:49.850 "name": null, 00:25:49.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.850 "is_configured": false, 00:25:49.850 "data_offset": 0, 00:25:49.850 "data_size": 7936 00:25:49.850 }, 00:25:49.850 { 00:25:49.850 "name": "BaseBdev2", 00:25:49.850 
"uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:49.850 "is_configured": true, 00:25:49.850 "data_offset": 256, 00:25:49.850 "data_size": 7936 00:25:49.850 } 00:25:49.850 ] 00:25:49.850 }' 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.850 23:07:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.417 23:07:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:50.417 23:07:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.417 23:07:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.417 [2024-12-09 23:07:06.069643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.417 [2024-12-09 23:07:06.069971] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:50.417 [2024-12-09 23:07:06.070055] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:50.417 [2024-12-09 23:07:06.070165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.417 [2024-12-09 23:07:06.089561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:50.417 23:07:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.417 23:07:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:50.417 [2024-12-09 23:07:06.091880] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.356 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:25:51.356 "name": "raid_bdev1", 00:25:51.356 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:51.356 "strip_size_kb": 0, 00:25:51.356 "state": "online", 00:25:51.356 "raid_level": "raid1", 00:25:51.356 "superblock": true, 00:25:51.356 "num_base_bdevs": 2, 00:25:51.356 "num_base_bdevs_discovered": 2, 00:25:51.356 "num_base_bdevs_operational": 2, 00:25:51.356 "process": { 00:25:51.356 "type": "rebuild", 00:25:51.356 "target": "spare", 00:25:51.356 "progress": { 00:25:51.356 "blocks": 2560, 00:25:51.356 "percent": 32 00:25:51.356 } 00:25:51.356 }, 00:25:51.356 "base_bdevs_list": [ 00:25:51.356 { 00:25:51.356 "name": "spare", 00:25:51.356 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:51.356 "is_configured": true, 00:25:51.356 "data_offset": 256, 00:25:51.356 "data_size": 7936 00:25:51.356 }, 00:25:51.356 { 00:25:51.356 "name": "BaseBdev2", 00:25:51.356 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:51.357 "is_configured": true, 00:25:51.357 "data_offset": 256, 00:25:51.357 "data_size": 7936 00:25:51.357 } 00:25:51.357 ] 00:25:51.357 }' 00:25:51.357 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.357 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.357 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.617 [2024-12-09 23:07:07.255265] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.617 [2024-12-09 23:07:07.298321] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:51.617 [2024-12-09 23:07:07.298559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.617 [2024-12-09 23:07:07.298589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.617 [2024-12-09 23:07:07.298606] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.617 23:07:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.617 "name": "raid_bdev1", 00:25:51.617 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:51.617 "strip_size_kb": 0, 00:25:51.617 "state": "online", 00:25:51.617 "raid_level": "raid1", 00:25:51.617 "superblock": true, 00:25:51.617 "num_base_bdevs": 2, 00:25:51.617 "num_base_bdevs_discovered": 1, 00:25:51.617 "num_base_bdevs_operational": 1, 00:25:51.617 "base_bdevs_list": [ 00:25:51.617 { 00:25:51.617 "name": null, 00:25:51.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.617 "is_configured": false, 00:25:51.617 "data_offset": 0, 00:25:51.617 "data_size": 7936 00:25:51.617 }, 00:25:51.617 { 00:25:51.617 "name": "BaseBdev2", 00:25:51.617 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:51.617 "is_configured": true, 00:25:51.617 "data_offset": 256, 00:25:51.617 "data_size": 7936 00:25:51.617 } 00:25:51.617 ] 00:25:51.617 }' 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.617 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.184 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:52.184 23:07:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.184 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.184 [2024-12-09 23:07:07.826430] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:52.184 [2024-12-09 23:07:07.826611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.184 [2024-12-09 23:07:07.826684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:52.184 [2024-12-09 23:07:07.826726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.184 [2024-12-09 23:07:07.826987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.184 [2024-12-09 23:07:07.827046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:52.184 [2024-12-09 23:07:07.827154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:52.184 [2024-12-09 23:07:07.827201] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:52.184 [2024-12-09 23:07:07.827251] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:52.184 [2024-12-09 23:07:07.827317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:52.184 [2024-12-09 23:07:07.846179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:52.184 spare 00:25:52.184 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.184 23:07:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:52.184 [2024-12-09 23:07:07.848520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.123 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:25:53.123 "name": "raid_bdev1", 00:25:53.123 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:53.123 "strip_size_kb": 0, 00:25:53.123 "state": "online", 00:25:53.123 "raid_level": "raid1", 00:25:53.123 "superblock": true, 00:25:53.123 "num_base_bdevs": 2, 00:25:53.123 "num_base_bdevs_discovered": 2, 00:25:53.123 "num_base_bdevs_operational": 2, 00:25:53.123 "process": { 00:25:53.123 "type": "rebuild", 00:25:53.123 "target": "spare", 00:25:53.123 "progress": { 00:25:53.123 "blocks": 2560, 00:25:53.123 "percent": 32 00:25:53.123 } 00:25:53.123 }, 00:25:53.123 "base_bdevs_list": [ 00:25:53.123 { 00:25:53.123 "name": "spare", 00:25:53.123 "uuid": "409be2b9-50b1-5cde-bb73-d2d53f5f9836", 00:25:53.123 "is_configured": true, 00:25:53.123 "data_offset": 256, 00:25:53.123 "data_size": 7936 00:25:53.123 }, 00:25:53.123 { 00:25:53.123 "name": "BaseBdev2", 00:25:53.123 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:53.124 "is_configured": true, 00:25:53.124 "data_offset": 256, 00:25:53.124 "data_size": 7936 00:25:53.124 } 00:25:53.124 ] 00:25:53.124 }' 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.124 23:07:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.124 [2024-12-09 
23:07:08.976836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:53.384 [2024-12-09 23:07:09.055209] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:53.384 [2024-12-09 23:07:09.055448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.384 [2024-12-09 23:07:09.055533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:53.384 [2024-12-09 23:07:09.055585] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.384 23:07:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.384 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.384 "name": "raid_bdev1", 00:25:53.384 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:53.384 "strip_size_kb": 0, 00:25:53.384 "state": "online", 00:25:53.384 "raid_level": "raid1", 00:25:53.384 "superblock": true, 00:25:53.384 "num_base_bdevs": 2, 00:25:53.384 "num_base_bdevs_discovered": 1, 00:25:53.384 "num_base_bdevs_operational": 1, 00:25:53.384 "base_bdevs_list": [ 00:25:53.384 { 00:25:53.384 "name": null, 00:25:53.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.385 "is_configured": false, 00:25:53.385 "data_offset": 0, 00:25:53.385 "data_size": 7936 00:25:53.385 }, 00:25:53.385 { 00:25:53.385 "name": "BaseBdev2", 00:25:53.385 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:53.385 "is_configured": true, 00:25:53.385 "data_offset": 256, 00:25:53.385 "data_size": 7936 00:25:53.385 } 00:25:53.385 ] 00:25:53.385 }' 00:25:53.385 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.385 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.955 23:07:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.955 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:53.955 "name": "raid_bdev1", 00:25:53.955 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:53.955 "strip_size_kb": 0, 00:25:53.955 "state": "online", 00:25:53.955 "raid_level": "raid1", 00:25:53.955 "superblock": true, 00:25:53.955 "num_base_bdevs": 2, 00:25:53.955 "num_base_bdevs_discovered": 1, 00:25:53.956 "num_base_bdevs_operational": 1, 00:25:53.956 "base_bdevs_list": [ 00:25:53.956 { 00:25:53.956 "name": null, 00:25:53.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.956 "is_configured": false, 00:25:53.956 "data_offset": 0, 00:25:53.956 "data_size": 7936 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "name": "BaseBdev2", 00:25:53.956 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:53.956 "is_configured": true, 00:25:53.956 "data_offset": 256, 
00:25:53.956 "data_size": 7936 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }' 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.956 [2024-12-09 23:07:09.750159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:53.956 [2024-12-09 23:07:09.750333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.956 [2024-12-09 23:07:09.750391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:53.956 [2024-12-09 23:07:09.750432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.956 [2024-12-09 23:07:09.750705] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.956 [2024-12-09 23:07:09.750765] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:53.956 [2024-12-09 23:07:09.750870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:53.956 [2024-12-09 23:07:09.750915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:53.956 [2024-12-09 23:07:09.750962] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:53.956 [2024-12-09 23:07:09.751008] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:53.956 BaseBdev1 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.956 23:07:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.335 23:07:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.335 "name": "raid_bdev1", 00:25:55.335 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:55.335 "strip_size_kb": 0, 00:25:55.335 "state": "online", 00:25:55.335 "raid_level": "raid1", 00:25:55.335 "superblock": true, 00:25:55.335 "num_base_bdevs": 2, 00:25:55.335 "num_base_bdevs_discovered": 1, 00:25:55.335 "num_base_bdevs_operational": 1, 00:25:55.335 "base_bdevs_list": [ 00:25:55.335 { 00:25:55.335 "name": null, 00:25:55.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.335 "is_configured": false, 00:25:55.335 "data_offset": 0, 00:25:55.335 "data_size": 7936 00:25:55.335 }, 00:25:55.335 { 00:25:55.335 "name": "BaseBdev2", 00:25:55.335 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:55.335 "is_configured": true, 00:25:55.335 "data_offset": 256, 00:25:55.335 "data_size": 7936 00:25:55.335 } 00:25:55.335 ] 00:25:55.335 }' 00:25:55.335 23:07:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.335 23:07:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:55.595 "name": "raid_bdev1", 00:25:55.595 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:55.595 "strip_size_kb": 0, 00:25:55.595 "state": "online", 00:25:55.595 "raid_level": "raid1", 00:25:55.595 "superblock": true, 00:25:55.595 "num_base_bdevs": 2, 00:25:55.595 "num_base_bdevs_discovered": 1, 00:25:55.595 "num_base_bdevs_operational": 1, 00:25:55.595 "base_bdevs_list": [ 00:25:55.595 { 00:25:55.595 "name": 
null, 00:25:55.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.595 "is_configured": false, 00:25:55.595 "data_offset": 0, 00:25:55.595 "data_size": 7936 00:25:55.595 }, 00:25:55.595 { 00:25:55.595 "name": "BaseBdev2", 00:25:55.595 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:55.595 "is_configured": true, 00:25:55.595 "data_offset": 256, 00:25:55.595 "data_size": 7936 00:25:55.595 } 00:25:55.595 ] 00:25:55.595 }' 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.595 [2024-12-09 23:07:11.364807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.595 [2024-12-09 23:07:11.365079] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:55.595 [2024-12-09 23:07:11.365155] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:55.595 request: 00:25:55.595 { 00:25:55.595 "base_bdev": "BaseBdev1", 00:25:55.595 "raid_bdev": "raid_bdev1", 00:25:55.595 "method": "bdev_raid_add_base_bdev", 00:25:55.595 "req_id": 1 00:25:55.595 } 00:25:55.595 Got JSON-RPC error response 00:25:55.595 response: 00:25:55.595 { 00:25:55.595 "code": -22, 00:25:55.595 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:55.595 } 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.595 23:07:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:56.539 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.802 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.802 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.802 "name": "raid_bdev1", 00:25:56.802 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:56.802 "strip_size_kb": 0, 
00:25:56.802 "state": "online", 00:25:56.802 "raid_level": "raid1", 00:25:56.802 "superblock": true, 00:25:56.802 "num_base_bdevs": 2, 00:25:56.802 "num_base_bdevs_discovered": 1, 00:25:56.802 "num_base_bdevs_operational": 1, 00:25:56.802 "base_bdevs_list": [ 00:25:56.802 { 00:25:56.802 "name": null, 00:25:56.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.802 "is_configured": false, 00:25:56.802 "data_offset": 0, 00:25:56.802 "data_size": 7936 00:25:56.802 }, 00:25:56.802 { 00:25:56.802 "name": "BaseBdev2", 00:25:56.802 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:56.802 "is_configured": true, 00:25:56.802 "data_offset": 256, 00:25:56.802 "data_size": 7936 00:25:56.802 } 00:25:56.802 ] 00:25:56.802 }' 00:25:56.802 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.802 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.062 
23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:57.062 "name": "raid_bdev1", 00:25:57.062 "uuid": "b36e715e-c954-4002-ae57-4ae8d41326b8", 00:25:57.062 "strip_size_kb": 0, 00:25:57.062 "state": "online", 00:25:57.062 "raid_level": "raid1", 00:25:57.062 "superblock": true, 00:25:57.062 "num_base_bdevs": 2, 00:25:57.062 "num_base_bdevs_discovered": 1, 00:25:57.062 "num_base_bdevs_operational": 1, 00:25:57.062 "base_bdevs_list": [ 00:25:57.062 { 00:25:57.062 "name": null, 00:25:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.062 "is_configured": false, 00:25:57.062 "data_offset": 0, 00:25:57.062 "data_size": 7936 00:25:57.062 }, 00:25:57.062 { 00:25:57.062 "name": "BaseBdev2", 00:25:57.062 "uuid": "eb42af37-62d6-58b9-9a73-4681f6bae9dc", 00:25:57.062 "is_configured": true, 00:25:57.062 "data_offset": 256, 00:25:57.062 "data_size": 7936 00:25:57.062 } 00:25:57.062 ] 00:25:57.062 }' 00:25:57.062 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:57.322 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:57.322 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:57.322 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:57.322 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89783 00:25:57.322 23:07:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89783 ']' 00:25:57.322 23:07:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89783 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89783 00:25:57.322 killing process with pid 89783 00:25:57.322 Received shutdown signal, test time was about 60.000000 seconds 00:25:57.322 00:25:57.322 Latency(us) 00:25:57.322 [2024-12-09T23:07:13.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.322 [2024-12-09T23:07:13.178Z] =================================================================================================================== 00:25:57.322 [2024-12-09T23:07:13.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89783' 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89783 00:25:57.322 23:07:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89783 00:25:57.322 [2024-12-09 23:07:13.038263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:57.322 [2024-12-09 23:07:13.038410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.322 [2024-12-09 23:07:13.038499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:25:57.322 [2024-12-09 23:07:13.038514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:57.581 [2024-12-09 23:07:13.404125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:58.976 ************************************ 00:25:58.976 END TEST raid_rebuild_test_sb_md_interleaved 00:25:58.976 ************************************ 00:25:58.976 23:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:25:58.976 00:25:58.976 real 0m18.369s 00:25:58.976 user 0m24.248s 00:25:58.976 sys 0m1.662s 00:25:58.976 23:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.976 23:07:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:58.976 23:07:14 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:25:58.976 23:07:14 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:25:58.976 23:07:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89783 ']' 00:25:58.976 23:07:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89783 00:25:58.976 23:07:14 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:25:58.976 ************************************ 00:25:58.976 END TEST bdev_raid 00:25:58.976 ************************************ 00:25:58.976 00:25:58.976 real 12m42.468s 00:25:58.976 user 17m2.007s 00:25:58.976 sys 2m1.757s 00:25:58.976 23:07:14 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.976 23:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:58.976 23:07:14 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:58.976 23:07:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:58.976 23:07:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:58.976 23:07:14 -- common/autotest_common.sh@10 -- # set +x 00:25:58.976 
************************************ 00:25:58.976 START TEST spdkcli_raid 00:25:58.976 ************************************ 00:25:58.976 23:07:14 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:59.235 * Looking for test storage... 00:25:59.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:59.235 23:07:14 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:59.235 23:07:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:59.235 23:07:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:25:59.235 23:07:14 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:59.235 23:07:14 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.235 23:07:14 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.235 23:07:14 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.235 23:07:14 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.235 23:07:14 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.235 23:07:14 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.235 23:07:15 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:59.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.235 --rc genhtml_branch_coverage=1 00:25:59.235 --rc genhtml_function_coverage=1 00:25:59.235 --rc genhtml_legend=1 00:25:59.235 --rc geninfo_all_blocks=1 00:25:59.235 --rc geninfo_unexecuted_blocks=1 00:25:59.235 00:25:59.235 ' 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:59.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.235 --rc genhtml_branch_coverage=1 00:25:59.235 --rc genhtml_function_coverage=1 00:25:59.235 --rc genhtml_legend=1 00:25:59.235 --rc geninfo_all_blocks=1 00:25:59.235 --rc geninfo_unexecuted_blocks=1 00:25:59.235 00:25:59.235 ' 00:25:59.235 
23:07:15 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:59.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.235 --rc genhtml_branch_coverage=1 00:25:59.235 --rc genhtml_function_coverage=1 00:25:59.235 --rc genhtml_legend=1 00:25:59.235 --rc geninfo_all_blocks=1 00:25:59.235 --rc geninfo_unexecuted_blocks=1 00:25:59.235 00:25:59.235 ' 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:59.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.235 --rc genhtml_branch_coverage=1 00:25:59.235 --rc genhtml_function_coverage=1 00:25:59.235 --rc genhtml_legend=1 00:25:59.235 --rc geninfo_all_blocks=1 00:25:59.235 --rc geninfo_unexecuted_blocks=1 00:25:59.235 00:25:59.235 ' 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:25:59.235 23:07:15 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90470 00:25:59.235 23:07:15 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90470 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90470 ']' 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:59.235 23:07:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.493 [2024-12-09 23:07:15.182679] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:25:59.493 [2024-12-09 23:07:15.182959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90470 ] 00:25:59.751 [2024-12-09 23:07:15.368799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:59.751 [2024-12-09 23:07:15.510888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.751 [2024-12-09 23:07:15.510921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.132 23:07:16 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.132 23:07:16 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:26:01.132 23:07:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:26:01.132 23:07:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.132 23:07:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.132 23:07:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:26:01.132 23:07:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.132 23:07:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.132 23:07:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:01.132 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:01.132 ' 00:26:02.509 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:26:02.509 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:26:02.509 23:07:18 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:26:02.509 23:07:18 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.509 23:07:18 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.509 23:07:18 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:26:02.509 23:07:18 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.509 23:07:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:02.510 23:07:18 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:26:02.510 ' 00:26:03.885 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:26:03.885 23:07:19 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:26:03.885 23:07:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.885 23:07:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:03.885 23:07:19 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:26:03.885 23:07:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.885 23:07:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:03.885 23:07:19 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:26:03.885 23:07:19 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:26:04.451 23:07:20 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:26:04.451 23:07:20 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:26:04.451 23:07:20 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:26:04.451 23:07:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:04.451 23:07:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.451 23:07:20 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:26:04.451 23:07:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.451 23:07:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.451 23:07:20 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:26:04.451 ' 00:26:05.386 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:26:05.646 23:07:21 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:26:05.646 23:07:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:05.646 23:07:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.646 23:07:21 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:26:05.646 23:07:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.646 23:07:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.646 23:07:21 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:26:05.646 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:26:05.646 ' 00:26:07.024 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:26:07.024 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:26:07.282 23:07:22 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:07.282 23:07:22 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90470 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90470 ']' 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90470 00:26:07.282 23:07:22 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90470 00:26:07.282 killing process with pid 90470 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90470' 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90470 00:26:07.282 23:07:22 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90470 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90470 ']' 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90470 00:26:10.652 Process with pid 90470 is not found 00:26:10.652 23:07:25 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90470 ']' 00:26:10.652 23:07:25 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90470 00:26:10.652 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90470) - No such process 00:26:10.652 23:07:25 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90470 is not found' 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:10.652 23:07:25 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:10.652 ************************************ 00:26:10.652 END TEST spdkcli_raid 
00:26:10.652 ************************************ 00:26:10.652 00:26:10.652 real 0m11.058s 00:26:10.652 user 0m22.847s 00:26:10.652 sys 0m1.107s 00:26:10.652 23:07:25 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.652 23:07:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.652 23:07:25 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:10.652 23:07:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:10.652 23:07:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.652 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:26:10.652 ************************************ 00:26:10.652 START TEST blockdev_raid5f 00:26:10.652 ************************************ 00:26:10.652 23:07:25 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:10.652 * Looking for test storage... 00:26:10.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.652 23:07:26 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:10.652 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.652 --rc genhtml_branch_coverage=1 00:26:10.652 --rc genhtml_function_coverage=1 00:26:10.652 --rc genhtml_legend=1 00:26:10.652 --rc geninfo_all_blocks=1 00:26:10.652 --rc geninfo_unexecuted_blocks=1 00:26:10.652 00:26:10.652 ' 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.652 --rc genhtml_branch_coverage=1 00:26:10.652 --rc genhtml_function_coverage=1 00:26:10.652 --rc genhtml_legend=1 00:26:10.652 --rc geninfo_all_blocks=1 00:26:10.652 --rc geninfo_unexecuted_blocks=1 00:26:10.652 00:26:10.652 ' 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.652 --rc genhtml_branch_coverage=1 00:26:10.652 --rc genhtml_function_coverage=1 00:26:10.652 --rc genhtml_legend=1 00:26:10.652 --rc geninfo_all_blocks=1 00:26:10.652 --rc geninfo_unexecuted_blocks=1 00:26:10.652 00:26:10.652 ' 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:10.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.652 --rc genhtml_branch_coverage=1 00:26:10.652 --rc genhtml_function_coverage=1 00:26:10.652 --rc genhtml_legend=1 00:26:10.652 --rc geninfo_all_blocks=1 00:26:10.652 --rc geninfo_unexecuted_blocks=1 00:26:10.652 00:26:10.652 ' 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90752 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:10.652 23:07:26 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90752 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90752 ']' 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.652 23:07:26 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.653 23:07:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:10.653 [2024-12-09 23:07:26.267215] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:26:10.653 [2024-12-09 23:07:26.267530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90752 ] 00:26:10.653 [2024-12-09 23:07:26.465943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.910 [2024-12-09 23:07:26.605058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.846 23:07:27 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.846 23:07:27 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:26:11.846 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:26:11.846 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:26:11.846 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:26:11.846 23:07:27 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.846 23:07:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:11.846 Malloc0 00:26:12.105 Malloc1 00:26:12.105 Malloc2 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == 
false)' 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dd95d9ae-a7ae-43d9-b020-ab5fd8ef7845"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dd95d9ae-a7ae-43d9-b020-ab5fd8ef7845",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dd95d9ae-a7ae-43d9-b020-ab5fd8ef7845",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "67525c67-186e-46d9-bce2-cf6f5375c850",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"71ea6fd2-a9fe-48d4-9a70-31eb332a9d5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "300a0f41-24dc-4047-9695-a7f5754a2471",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:26:12.105 23:07:27 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90752 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90752 ']' 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90752 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90752 00:26:12.105 killing process with pid 90752 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90752' 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90752 00:26:12.105 23:07:27 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90752 00:26:15.406 23:07:31 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:15.406 23:07:31 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:15.406 23:07:31 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:15.406 23:07:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.406 23:07:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:15.406 ************************************ 00:26:15.406 START TEST bdev_hello_world 00:26:15.406 ************************************ 00:26:15.406 23:07:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:15.406 [2024-12-09 23:07:31.212793] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:26:15.406 [2024-12-09 23:07:31.212930] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90824 ] 00:26:15.664 [2024-12-09 23:07:31.391174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.923 [2024-12-09 23:07:31.524563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.490 [2024-12-09 23:07:32.129480] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:16.490 [2024-12-09 23:07:32.129540] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:26:16.490 [2024-12-09 23:07:32.129563] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:16.490 [2024-12-09 23:07:32.130176] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:16.490 [2024-12-09 23:07:32.130337] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:16.490 [2024-12-09 23:07:32.130358] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:16.490 [2024-12-09 23:07:32.130421] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:26:16.490 00:26:16.490 [2024-12-09 23:07:32.130444] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:18.425 00:26:18.425 real 0m2.741s 00:26:18.425 user 0m2.335s 00:26:18.425 sys 0m0.275s 00:26:18.425 23:07:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.425 23:07:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:18.425 ************************************ 00:26:18.425 END TEST bdev_hello_world 00:26:18.425 ************************************ 00:26:18.425 23:07:33 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:26:18.425 23:07:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:18.425 23:07:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.425 23:07:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:18.425 ************************************ 00:26:18.425 START TEST bdev_bounds 00:26:18.425 ************************************ 00:26:18.425 Process bdevio pid: 90877 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90877 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90877' 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90877 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90877 ']' 00:26:18.425 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.425 23:07:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:18.425 [2024-12-09 23:07:33.973068] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:26:18.426 [2024-12-09 23:07:33.973310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90877 ] 00:26:18.426 [2024-12-09 23:07:34.137886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:18.684 [2024-12-09 23:07:34.281926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.684 [2024-12-09 23:07:34.282031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.684 [2024-12-09 23:07:34.282038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.255 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.255 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:26:19.255 23:07:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:19.514 I/O targets: 00:26:19.514 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:26:19.514 00:26:19.514 
00:26:19.514 CUnit - A unit testing framework for C - Version 2.1-3 00:26:19.514 http://cunit.sourceforge.net/ 00:26:19.514 00:26:19.514 00:26:19.514 Suite: bdevio tests on: raid5f 00:26:19.514 Test: blockdev write read block ...passed 00:26:19.514 Test: blockdev write zeroes read block ...passed 00:26:19.514 Test: blockdev write zeroes read no split ...passed 00:26:19.773 Test: blockdev write zeroes read split ...passed 00:26:19.773 Test: blockdev write zeroes read split partial ...passed 00:26:19.773 Test: blockdev reset ...passed 00:26:19.773 Test: blockdev write read 8 blocks ...passed 00:26:19.773 Test: blockdev write read size > 128k ...passed 00:26:19.773 Test: blockdev write read invalid size ...passed 00:26:19.773 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:19.773 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:19.773 Test: blockdev write read max offset ...passed 00:26:19.773 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:19.773 Test: blockdev writev readv 8 blocks ...passed 00:26:19.773 Test: blockdev writev readv 30 x 1block ...passed 00:26:19.773 Test: blockdev writev readv block ...passed 00:26:19.773 Test: blockdev writev readv size > 128k ...passed 00:26:19.773 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:19.773 Test: blockdev comparev and writev ...passed 00:26:19.773 Test: blockdev nvme passthru rw ...passed 00:26:19.773 Test: blockdev nvme passthru vendor specific ...passed 00:26:19.773 Test: blockdev nvme admin passthru ...passed 00:26:19.773 Test: blockdev copy ...passed 00:26:19.773 00:26:19.773 Run Summary: Type Total Ran Passed Failed Inactive 00:26:19.773 suites 1 1 n/a 0 0 00:26:19.773 tests 23 23 23 0 0 00:26:19.773 asserts 130 130 130 0 n/a 00:26:19.773 00:26:19.773 Elapsed time = 0.748 seconds 00:26:19.773 0 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90877 00:26:19.773 
23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90877 ']' 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90877 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90877 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90877' 00:26:19.773 killing process with pid 90877 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90877 00:26:19.773 23:07:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90877 00:26:21.671 23:07:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:21.671 00:26:21.671 real 0m3.362s 00:26:21.671 user 0m8.717s 00:26:21.671 sys 0m0.420s 00:26:21.671 23:07:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.671 23:07:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:21.671 ************************************ 00:26:21.671 END TEST bdev_bounds 00:26:21.671 ************************************ 00:26:21.671 23:07:37 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:21.671 23:07:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:21.671 23:07:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.671 
23:07:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.671 ************************************ 00:26:21.671 START TEST bdev_nbd 00:26:21.671 ************************************ 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90941 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90941 /var/tmp/spdk-nbd.sock 00:26:21.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90941 ']' 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.671 23:07:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:21.671 [2024-12-09 23:07:37.462315] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:26:21.671 [2024-12-09 23:07:37.462591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.929 [2024-12-09 23:07:37.668112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.187 [2024-12-09 23:07:37.807347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:22.753 23:07:38 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:23.011 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:23.269 1+0 records in 00:26:23.269 1+0 records out 00:26:23.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432772 s, 9.5 MB/s 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:23.269 23:07:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:23.526 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:23.526 { 00:26:23.527 "nbd_device": "/dev/nbd0", 00:26:23.527 "bdev_name": "raid5f" 00:26:23.527 } 00:26:23.527 ]' 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:23.527 { 00:26:23.527 "nbd_device": "/dev/nbd0", 00:26:23.527 "bdev_name": "raid5f" 00:26:23.527 } 00:26:23.527 ]' 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:23.527 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:23.786 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:24.044 23:07:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:26:24.301 /dev/nbd0 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:24.301 23:07:40 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:24.301 1+0 records in 00:26:24.301 1+0 records out 00:26:24.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448894 s, 9.1 MB/s 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:24.301 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:24.558 { 00:26:24.558 "nbd_device": "/dev/nbd0", 00:26:24.558 "bdev_name": "raid5f" 00:26:24.558 } 00:26:24.558 ]' 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:24.558 { 00:26:24.558 "nbd_device": "/dev/nbd0", 00:26:24.558 "bdev_name": "raid5f" 00:26:24.558 } 00:26:24.558 ]' 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:24.558 256+0 records in 00:26:24.558 256+0 records out 00:26:24.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00814186 s, 129 MB/s 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:24.558 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:24.816 256+0 records in 00:26:24.816 256+0 records out 00:26:24.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0356459 s, 29.4 MB/s 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:24.816 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:25.074 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:26:25.332 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:25.332 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:25.332 23:07:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:26:25.332 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:25.590 malloc_lvol_verify 00:26:25.590 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:25.850 b4ba632d-6b10-42b2-994b-8be0c2ce4086 00:26:25.850 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:26.110 84d1bdb8-15c7-4265-85f4-a3993c837681 00:26:26.110 23:07:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:26.368 /dev/nbd0 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:26:26.368 mke2fs 1.47.0 (5-Feb-2023) 00:26:26.368 Discarding device blocks: 0/4096 done 00:26:26.368 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:26.368 00:26:26.368 Allocating group tables: 0/1 done 00:26:26.368 Writing inode tables: 0/1 done 00:26:26.368 Creating journal (1024 blocks): done 00:26:26.368 Writing superblocks and filesystem accounting information: 0/1 done 00:26:26.368 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:26.368 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:26.625 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:26.625 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:26.625 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:26.625 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:26.625 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:26.625 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90941 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90941 ']' 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90941 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90941 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.626 killing process with pid 90941 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90941' 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90941 00:26:26.626 23:07:42 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90941 00:26:28.523 23:07:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:28.523 00:26:28.523 real 0m6.831s 00:26:28.523 user 0m9.464s 00:26:28.523 sys 0m1.481s 00:26:28.523 23:07:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.523 23:07:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:28.523 ************************************ 00:26:28.523 END TEST bdev_nbd 00:26:28.523 ************************************ 00:26:28.523 23:07:44 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:26:28.523 23:07:44 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:26:28.523 23:07:44 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:26:28.523 23:07:44 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:26:28.523 23:07:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:28.523 23:07:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.523 23:07:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:28.523 ************************************ 00:26:28.523 START TEST bdev_fio 00:26:28.523 ************************************ 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:28.523 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:28.523 23:07:44 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:26:28.523 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:28.524 ************************************ 00:26:28.524 START TEST bdev_fio_rw_verify 00:26:28.524 ************************************ 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:26:28.524 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.781 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:28.781 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:28.781 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:26:28.781 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:28.781 23:07:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:28.781 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:28.781 fio-3.35 00:26:28.781 Starting 1 thread 00:26:41.080 00:26:41.080 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91154: Mon Dec 9 23:07:55 2024 00:26:41.080 read: IOPS=7680, BW=30.0MiB/s (31.5MB/s)(300MiB/10001msec) 00:26:41.080 slat (nsec): min=25234, max=86118, avg=32055.93, stdev=4123.83 00:26:41.080 clat (usec): min=14, max=598, avg=206.73, stdev=77.55 00:26:41.081 lat (usec): min=45, max=648, avg=238.79, stdev=78.49 00:26:41.081 clat percentiles (usec): 00:26:41.081 | 50.000th=[ 208], 99.000th=[ 375], 99.900th=[ 449], 99.990th=[ 490], 00:26:41.081 | 99.999th=[ 603] 00:26:41.081 write: IOPS=8078, BW=31.6MiB/s (33.1MB/s)(312MiB/9887msec); 0 zone resets 00:26:41.081 slat (usec): min=12, max=362, avg=26.57, stdev= 6.78 00:26:41.081 clat (usec): min=86, max=1855, avg=470.88, stdev=68.52 00:26:41.081 lat (usec): min=110, max=2217, avg=497.45, stdev=70.49 00:26:41.081 clat percentiles (usec): 00:26:41.081 | 50.000th=[ 474], 99.000th=[ 668], 99.900th=[ 783], 99.990th=[ 1303], 00:26:41.081 | 99.999th=[ 1860] 00:26:41.081 bw ( KiB/s): min=29016, max=36568, per=98.88%, avg=31953.68, stdev=1930.62, samples=19 00:26:41.081 iops : min= 7254, max= 9142, avg=7988.42, stdev=482.65, samples=19 00:26:41.081 lat (usec) : 20=0.01%, 100=5.31%, 
250=27.28%, 500=50.35%, 750=16.97% 00:26:41.081 lat (usec) : 1000=0.08% 00:26:41.081 lat (msec) : 2=0.02% 00:26:41.081 cpu : usr=98.48%, sys=0.60%, ctx=30, majf=0, minf=6820 00:26:41.081 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.081 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.081 issued rwts: total=76808,79877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.081 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:41.081 00:26:41.081 Run status group 0 (all jobs): 00:26:41.081 READ: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=300MiB (315MB), run=10001-10001msec 00:26:41.081 WRITE: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=312MiB (327MB), run=9887-9887msec 00:26:42.028 ----------------------------------------------------- 00:26:42.028 Suppressions used: 00:26:42.028 count bytes template 00:26:42.028 1 7 /usr/src/fio/parse.c 00:26:42.028 677 64992 /usr/src/fio/iolog.c 00:26:42.028 1 8 libtcmalloc_minimal.so 00:26:42.028 1 904 libcrypto.so 00:26:42.028 ----------------------------------------------------- 00:26:42.028 00:26:42.028 00:26:42.028 real 0m13.313s 00:26:42.028 user 0m13.340s 00:26:42.028 sys 0m0.748s 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:42.028 ************************************ 00:26:42.028 END TEST bdev_fio_rw_verify 00:26:42.028 ************************************ 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dd95d9ae-a7ae-43d9-b020-ab5fd8ef7845"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dd95d9ae-a7ae-43d9-b020-ab5fd8ef7845",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dd95d9ae-a7ae-43d9-b020-ab5fd8ef7845",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "67525c67-186e-46d9-bce2-cf6f5375c850",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "71ea6fd2-a9fe-48d4-9a70-31eb332a9d5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "300a0f41-24dc-4047-9695-a7f5754a2471",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:42.028 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:42.028 /home/vagrant/spdk_repo/spdk 00:26:42.029 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:42.029 23:07:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:26:42.029 00:26:42.029 real 0m13.547s 00:26:42.029 user 0m13.451s 00:26:42.029 sys 0m0.846s 00:26:42.029 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.029 23:07:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:42.029 ************************************ 00:26:42.029 END TEST bdev_fio 00:26:42.029 ************************************ 00:26:42.029 23:07:57 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:42.029 23:07:57 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:42.029 23:07:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:26:42.029 23:07:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.029 23:07:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:42.029 ************************************ 00:26:42.029 START TEST bdev_verify 00:26:42.029 ************************************ 00:26:42.029 23:07:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:42.288 [2024-12-09 23:07:57.910256] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 
00:26:42.288 [2024-12-09 23:07:57.910621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91318 ] 00:26:42.288 [2024-12-09 23:07:58.096208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:42.547 [2024-12-09 23:07:58.276158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.547 [2024-12-09 23:07:58.276165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.113 Running I/O for 5 seconds... 00:26:45.424 10505.00 IOPS, 41.04 MiB/s [2024-12-09T23:08:02.214Z] 10785.50 IOPS, 42.13 MiB/s [2024-12-09T23:08:03.156Z] 11110.33 IOPS, 43.40 MiB/s [2024-12-09T23:08:04.093Z] 11325.25 IOPS, 44.24 MiB/s [2024-12-09T23:08:04.093Z] 11231.40 IOPS, 43.87 MiB/s 00:26:48.237 Latency(us) 00:26:48.237 [2024-12-09T23:08:04.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.237 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:48.237 Verification LBA range: start 0x0 length 0x2000 00:26:48.237 raid5f : 5.02 5603.53 21.89 0.00 0.00 34296.66 270.09 30678.86 00:26:48.237 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:48.237 Verification LBA range: start 0x2000 length 0x2000 00:26:48.237 raid5f : 5.02 5626.89 21.98 0.00 0.00 34117.74 341.63 30678.86 00:26:48.237 [2024-12-09T23:08:04.093Z] =================================================================================================================== 00:26:48.237 [2024-12-09T23:08:04.093Z] Total : 11230.42 43.87 0.00 0.00 34207.02 270.09 30678.86 00:26:50.136 00:26:50.136 real 0m7.895s 00:26:50.136 user 0m14.440s 00:26:50.136 sys 0m0.320s 00:26:50.136 23:08:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.136 23:08:05 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:50.136 ************************************ 00:26:50.136 END TEST bdev_verify 00:26:50.136 ************************************ 00:26:50.136 23:08:05 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:50.136 23:08:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:26:50.136 23:08:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.136 23:08:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:50.136 ************************************ 00:26:50.136 START TEST bdev_verify_big_io 00:26:50.136 ************************************ 00:26:50.136 23:08:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:50.136 [2024-12-09 23:08:05.844371] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:26:50.136 [2024-12-09 23:08:05.844517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91423 ] 00:26:50.394 [2024-12-09 23:08:06.019202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:50.394 [2024-12-09 23:08:06.152261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.394 [2024-12-09 23:08:06.152299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.962 Running I/O for 5 seconds... 
00:26:53.327 506.00 IOPS, 31.62 MiB/s [2024-12-09T23:08:10.126Z] 600.50 IOPS, 37.53 MiB/s [2024-12-09T23:08:11.066Z] 592.00 IOPS, 37.00 MiB/s [2024-12-09T23:08:12.002Z] 586.50 IOPS, 36.66 MiB/s [2024-12-09T23:08:12.262Z] 608.80 IOPS, 38.05 MiB/s 00:26:56.406 Latency(us) 00:26:56.406 [2024-12-09T23:08:12.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.406 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:56.406 Verification LBA range: start 0x0 length 0x200 00:26:56.406 raid5f : 5.47 301.55 18.85 0.00 0.00 10403559.32 327.32 468882.89 00:26:56.406 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:56.406 Verification LBA range: start 0x200 length 0x200 00:26:56.406 raid5f : 5.45 302.55 18.91 0.00 0.00 10326083.74 468.63 467051.32 00:26:56.406 [2024-12-09T23:08:12.262Z] =================================================================================================================== 00:26:56.406 [2024-12-09T23:08:12.262Z] Total : 604.10 37.76 0.00 0.00 10364821.53 327.32 468882.89 00:26:58.311 00:26:58.311 real 0m8.203s 00:26:58.311 user 0m15.152s 00:26:58.311 sys 0m0.284s 00:26:58.311 23:08:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.311 23:08:13 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:58.311 ************************************ 00:26:58.311 END TEST bdev_verify_big_io 00:26:58.311 ************************************ 00:26:58.311 23:08:13 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:58.311 23:08:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:58.311 23:08:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.311 23:08:13 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:58.311 ************************************ 00:26:58.311 START TEST bdev_write_zeroes 00:26:58.311 ************************************ 00:26:58.311 23:08:13 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:58.311 [2024-12-09 23:08:14.094298] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:26:58.311 [2024-12-09 23:08:14.094482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91526 ] 00:26:58.578 [2024-12-09 23:08:14.268238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.838 [2024-12-09 23:08:14.439025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.405 Running I/O for 1 seconds... 
00:27:00.352 18087.00 IOPS, 70.65 MiB/s 00:27:00.352 Latency(us) 00:27:00.352 [2024-12-09T23:08:16.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.352 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:00.352 raid5f : 1.01 18075.76 70.61 0.00 0.00 7052.07 2132.07 9444.05 00:27:00.352 [2024-12-09T23:08:16.208Z] =================================================================================================================== 00:27:00.352 [2024-12-09T23:08:16.208Z] Total : 18075.76 70.61 0.00 0.00 7052.07 2132.07 9444.05 00:27:02.287 00:27:02.287 real 0m3.782s 00:27:02.287 user 0m3.336s 00:27:02.287 sys 0m0.308s 00:27:02.287 23:08:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.287 23:08:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:02.287 ************************************ 00:27:02.287 END TEST bdev_write_zeroes 00:27:02.287 ************************************ 00:27:02.287 23:08:17 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:02.287 23:08:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:02.287 23:08:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.287 23:08:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:02.287 ************************************ 00:27:02.287 START TEST bdev_json_nonenclosed 00:27:02.287 ************************************ 00:27:02.287 23:08:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:02.287 [2024-12-09 
23:08:17.935119] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:27:02.287 [2024-12-09 23:08:17.935304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91587 ] 00:27:02.287 [2024-12-09 23:08:18.108500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.546 [2024-12-09 23:08:18.259553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.546 [2024-12-09 23:08:18.259660] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:02.546 [2024-12-09 23:08:18.259692] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:02.546 [2024-12-09 23:08:18.259704] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:02.804 00:27:02.804 real 0m0.734s 00:27:02.804 user 0m0.500s 00:27:02.804 sys 0m0.127s 00:27:02.804 23:08:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.804 23:08:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:02.804 ************************************ 00:27:02.804 END TEST bdev_json_nonenclosed 00:27:02.804 ************************************ 00:27:02.804 23:08:18 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:02.804 23:08:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:02.804 23:08:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.804 23:08:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:02.804 
************************************ 00:27:02.804 START TEST bdev_json_nonarray 00:27:02.804 ************************************ 00:27:02.804 23:08:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:03.062 [2024-12-09 23:08:18.720069] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization... 00:27:03.062 [2024-12-09 23:08:18.720274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91613 ] 00:27:03.320 [2024-12-09 23:08:18.922061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.320 [2024-12-09 23:08:19.060754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.320 [2024-12-09 23:08:19.060861] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:27:03.320 [2024-12-09 23:08:19.060883] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:03.320 [2024-12-09 23:08:19.060904] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:03.578 00:27:03.578 real 0m0.760s 00:27:03.578 user 0m0.499s 00:27:03.578 sys 0m0.154s 00:27:03.578 23:08:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.578 23:08:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:03.578 ************************************ 00:27:03.579 END TEST bdev_json_nonarray 00:27:03.579 ************************************ 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:27:03.579 23:08:19 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:27:03.579 00:27:03.579 real 0m53.500s 00:27:03.579 user 1m13.128s 00:27:03.579 sys 0m5.251s 00:27:03.579 23:08:19 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.579 23:08:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:03.579 
************************************ 00:27:03.579 END TEST blockdev_raid5f 00:27:03.579 ************************************ 00:27:03.839 23:08:19 -- spdk/autotest.sh@194 -- # uname -s 00:27:03.839 23:08:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:27:03.839 23:08:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:03.839 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:27:03.839 23:08:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:03.839 23:08:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:27:03.839 23:08:19 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:27:03.839 23:08:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:27:03.839 23:08:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:03.839 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:27:03.839 23:08:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:27:03.839 23:08:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:27:03.839 23:08:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:27:03.839 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:27:05.222 INFO: APP EXITING 00:27:05.222 INFO: killing all VMs 00:27:05.222 INFO: killing vhost app 00:27:05.222 INFO: EXIT DONE 00:27:05.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:05.801 Waiting for block devices as requested 00:27:05.801 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:05.801 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:06.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:06.630 Cleaning 00:27:06.630 Removing: /var/run/dpdk/spdk0/config 00:27:06.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:06.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:06.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:06.630 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:06.630 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:06.630 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:06.630 Removing: /dev/shm/spdk_tgt_trace.pid57176 00:27:06.630 Removing: /var/run/dpdk/spdk0 00:27:06.630 Removing: /var/run/dpdk/spdk_pid56941 00:27:06.630 Removing: /var/run/dpdk/spdk_pid57176 00:27:06.630 Removing: /var/run/dpdk/spdk_pid57405 00:27:06.630 Removing: /var/run/dpdk/spdk_pid57520 00:27:06.630 Removing: /var/run/dpdk/spdk_pid57575 00:27:06.630 Removing: /var/run/dpdk/spdk_pid57704 00:27:06.630 Removing: /var/run/dpdk/spdk_pid57728 
00:27:06.630 Removing: /var/run/dpdk/spdk_pid57943 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58061 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58175 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58307 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58426 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58471 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58502 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58578 00:27:06.630 Removing: /var/run/dpdk/spdk_pid58708 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59166 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59241 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59315 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59331 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59490 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59512 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59682 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59698 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59773 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59802 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59872 00:27:06.630 Removing: /var/run/dpdk/spdk_pid59895 00:27:06.630 Removing: /var/run/dpdk/spdk_pid60107 00:27:06.630 Removing: /var/run/dpdk/spdk_pid60143 00:27:06.630 Removing: /var/run/dpdk/spdk_pid60232 00:27:06.630 Removing: /var/run/dpdk/spdk_pid61630 00:27:06.630 Removing: /var/run/dpdk/spdk_pid61841 00:27:06.630 Removing: /var/run/dpdk/spdk_pid61987 00:27:06.630 Removing: /var/run/dpdk/spdk_pid62636 00:27:06.630 Removing: /var/run/dpdk/spdk_pid62853 00:27:06.630 Removing: /var/run/dpdk/spdk_pid63004 00:27:06.630 Removing: /var/run/dpdk/spdk_pid63655 00:27:06.630 Removing: /var/run/dpdk/spdk_pid63994 00:27:06.630 Removing: /var/run/dpdk/spdk_pid64140 00:27:06.630 Removing: /var/run/dpdk/spdk_pid65547 00:27:06.630 Removing: /var/run/dpdk/spdk_pid65808 00:27:06.630 Removing: /var/run/dpdk/spdk_pid65953 00:27:06.630 Removing: /var/run/dpdk/spdk_pid67366 00:27:06.630 Removing: /var/run/dpdk/spdk_pid67629 00:27:06.630 Removing: /var/run/dpdk/spdk_pid67777 
00:27:06.630 Removing: /var/run/dpdk/spdk_pid69181 00:27:06.630 Removing: /var/run/dpdk/spdk_pid69640 00:27:06.891 Removing: /var/run/dpdk/spdk_pid69787 00:27:06.891 Removing: /var/run/dpdk/spdk_pid71293 00:27:06.891 Removing: /var/run/dpdk/spdk_pid71561 00:27:06.891 Removing: /var/run/dpdk/spdk_pid71709 00:27:06.891 Removing: /var/run/dpdk/spdk_pid73218 00:27:06.891 Removing: /var/run/dpdk/spdk_pid73488 00:27:06.891 Removing: /var/run/dpdk/spdk_pid73634 00:27:06.891 Removing: /var/run/dpdk/spdk_pid75142 00:27:06.891 Removing: /var/run/dpdk/spdk_pid75633 00:27:06.891 Removing: /var/run/dpdk/spdk_pid75780 00:27:06.891 Removing: /var/run/dpdk/spdk_pid75924 00:27:06.891 Removing: /var/run/dpdk/spdk_pid76353 00:27:06.891 Removing: /var/run/dpdk/spdk_pid77094 00:27:06.891 Removing: /var/run/dpdk/spdk_pid77470 00:27:06.891 Removing: /var/run/dpdk/spdk_pid78170 00:27:06.891 Removing: /var/run/dpdk/spdk_pid78628 00:27:06.891 Removing: /var/run/dpdk/spdk_pid79394 00:27:06.891 Removing: /var/run/dpdk/spdk_pid79823 00:27:06.891 Removing: /var/run/dpdk/spdk_pid81808 00:27:06.891 Removing: /var/run/dpdk/spdk_pid82256 00:27:06.891 Removing: /var/run/dpdk/spdk_pid82696 00:27:06.891 Removing: /var/run/dpdk/spdk_pid84815 00:27:06.891 Removing: /var/run/dpdk/spdk_pid85303 00:27:06.891 Removing: /var/run/dpdk/spdk_pid85829 00:27:06.891 Removing: /var/run/dpdk/spdk_pid86894 00:27:06.891 Removing: /var/run/dpdk/spdk_pid87224 00:27:06.891 Removing: /var/run/dpdk/spdk_pid88175 00:27:06.891 Removing: /var/run/dpdk/spdk_pid88503 00:27:06.891 Removing: /var/run/dpdk/spdk_pid89456 00:27:06.891 Removing: /var/run/dpdk/spdk_pid89783 00:27:06.891 Removing: /var/run/dpdk/spdk_pid90470 00:27:06.891 Removing: /var/run/dpdk/spdk_pid90752 00:27:06.891 Removing: /var/run/dpdk/spdk_pid90824 00:27:06.891 Removing: /var/run/dpdk/spdk_pid90877 00:27:06.891 Removing: /var/run/dpdk/spdk_pid91139 00:27:06.891 Removing: /var/run/dpdk/spdk_pid91318 00:27:06.891 Removing: /var/run/dpdk/spdk_pid91423 
00:27:06.891 Removing: /var/run/dpdk/spdk_pid91526 00:27:06.891 Removing: /var/run/dpdk/spdk_pid91587 00:27:06.891 Removing: /var/run/dpdk/spdk_pid91613 00:27:06.891 Clean 00:27:06.891 23:08:22 -- common/autotest_common.sh@1453 -- # return 0 00:27:06.891 23:08:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:27:06.891 23:08:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.891 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:27:07.150 23:08:22 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:27:07.150 23:08:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.150 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:27:07.150 23:08:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:07.150 23:08:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:07.150 23:08:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:07.150 23:08:22 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:27:07.150 23:08:22 -- spdk/autotest.sh@398 -- # hostname 00:27:07.150 23:08:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:07.409 geninfo: WARNING: invalid characters removed from testname! 
00:27:34.101 23:08:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:36.023 23:08:51 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:38.556 23:08:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:41.099 23:08:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.014 23:08:58 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:46.298 23:09:01 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.836 23:09:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:48.836 23:09:04 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:48.836 23:09:04 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:48.836 23:09:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:48.836 23:09:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:48.836 23:09:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:48.836 + [[ -n 5432 ]] 00:27:48.836 + sudo kill 5432 00:27:48.845 [Pipeline] } 00:27:48.862 [Pipeline] // timeout 00:27:48.867 [Pipeline] } 00:27:48.887 [Pipeline] // stage 00:27:48.897 [Pipeline] } 00:27:48.928 [Pipeline] // catchError 00:27:48.948 [Pipeline] stage 00:27:48.951 [Pipeline] { (Stop VM) 00:27:48.964 [Pipeline] sh 00:27:49.240 + vagrant halt 00:27:52.536 ==> default: Halting domain... 00:28:00.660 [Pipeline] sh 00:28:00.942 + vagrant destroy -f 00:28:04.228 ==> default: Removing domain... 
00:28:04.240 [Pipeline] sh 00:28:04.525 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:28:04.535 [Pipeline] } 00:28:04.550 [Pipeline] // stage 00:28:04.555 [Pipeline] } 00:28:04.569 [Pipeline] // dir 00:28:04.574 [Pipeline] } 00:28:04.589 [Pipeline] // wrap 00:28:04.595 [Pipeline] } 00:28:04.609 [Pipeline] // catchError 00:28:04.619 [Pipeline] stage 00:28:04.621 [Pipeline] { (Epilogue) 00:28:04.635 [Pipeline] sh 00:28:04.918 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:11.485 [Pipeline] catchError 00:28:11.488 [Pipeline] { 00:28:11.504 [Pipeline] sh 00:28:11.785 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:11.785 Artifacts sizes are good 00:28:11.794 [Pipeline] } 00:28:11.808 [Pipeline] // catchError 00:28:11.818 [Pipeline] archiveArtifacts 00:28:11.825 Archiving artifacts 00:28:11.948 [Pipeline] cleanWs 00:28:11.970 [WS-CLEANUP] Deleting project workspace... 00:28:11.970 [WS-CLEANUP] Deferred wipeout is used... 00:28:11.986 [WS-CLEANUP] done 00:28:11.987 [Pipeline] } 00:28:12.000 [Pipeline] // stage 00:28:12.004 [Pipeline] } 00:28:12.016 [Pipeline] // node 00:28:12.020 [Pipeline] End of Pipeline 00:28:12.055 Finished: SUCCESS